What is "Intelligence"?

  • Published 10 Sep 2024

COMMENTS • 2.5K

  • @whycantiremainanonymous8091
    @whycantiremainanonymous8091 Рік тому +640

    My favourite quip on the topic is that we seem to insist on measuring the intelligence of humans by their ability to perform tasks like machines (a standard IQ test covers tasks computers excelled at 50 years ago) and on measuring the intelligence of machines by their ability to perform tasks like humans (the Turing test).

    • @jennyalberts5641
      @jennyalberts5641 Рік тому +87

      If intelligence is problem-solving skill, then an intelligence test would reasonably test whatever is usually a bottleneck for problem solving. For humans, that might very well be computerish skills, and for computers, humanish skills.

    • @Pierluigi_Di_Lorenzo
      @Pierluigi_Di_Lorenzo Рік тому +16

      I would like to see how computers from the '70s would solve Raven's Matrices, the Cattell Culture Fair, or a Wonderlic Cognitive Ability Test.
      Even today's ChatGPT (3.1) answers 'furniture' to this question: x is to vehicle as maple is to wood.

    • @whycantiremainanonymous8091
      @whycantiremainanonymous8091 Рік тому +4

      @@Pierluigi_Di_Lorenzo As well as the algorithm they're programmed with would allow. The main problem would be giving them the test input in a format they can read.

    • @whycantiremainanonymous8091
      @whycantiremainanonymous8091 Рік тому +13

      @@jennyalberts5641 A better case can be made that intelligence is a word we use to encode our social biases. It's really meaningless outside that context.

    • @oskarngo9138
      @oskarngo9138 Рік тому +7

      Chinese room Test for intelligence:
      1. Randomly rearrange characters/words (syntax) of your message...
      ....A computer will return “Error”...
      ...A real person will figure out the error quickly and return the correct answer..
      2. Tell a good joke....!

  • @iowa_don
    @iowa_don Рік тому +6

    0:35 - "Few things are harder to predict than the ways in which someone much smarter than you might outsmart you." - Paul Graham.
    That is pure gold, right up there with "Not only is the Universe stranger than we think, it is stranger than we can think." - Werner Heisenberg.

  • @IWillBiteOffAToe
    @IWillBiteOffAToe Рік тому +280

    My favorite thing about Sabine is how dry her delivery is for her extremely corny jokes 😂 I love you, Sabine, you're great

    • @markaberer
      @markaberer Рік тому +3

      Technically, yes.

    • @justinphillips1761
      @justinphillips1761 Рік тому +2

      That drives me crazy, but I like it for some reason. 😅

    • @ogi22
      @ogi22 Рік тому

      Yup! And I think I'm stealing the joke about not doubting consciousness, but having different opinions about intelligence 🤣

    • @andsalomoni
      @andsalomoni Рік тому +2

      Germans are to jokes as pepper is to sweets.

    • @markaberer
      @markaberer Рік тому +2

      @@andsalomoni You haven't been to Mexico yet, have you? 😁
      Me neither, but they also have spicy to sour sweets.
      This joke was generated by jokebot. Made in Germany.

  • @DanielSeacrest
    @DanielSeacrest Рік тому +90

    Hi Sabine, great video, I would just like to quickly address some of your points. The latest model by OpenAI, GPT-4, is significantly "smarter" than the regular ChatGPT most people interact with, and I think the SOTA model should be used when arguing about these AIs' capacity for intelligence. Another thing about GPT-4 is that it is actually a multimodal AI and can accept image input (though this feature is only in a research preview. Bing seems to have image inputs now, but the model that creates the final text output you see doesn't actually have image inputs enabled, so it is hard to truly test its full image capabilities such as reasoning, awareness, etc.).
    Now I would like to address its understanding of letters. I gave GPT-4 the same prompt you did, "Please write down a paragraph about animals that does not include the letter "n".", and on the first output it failed; however, when I pointed out that the sentence it had produced contained the letter "n", it apologised and then rewrote the paragraph correctly. This is just a demonstration that GPT-4 is better at following these types of tasks: if you tell GPT-3.5 that the sentence it produced contains an "n", it will not properly correct its mistake. Of course, this doesn't really mean it is intelligent in any way; it just recognised its error and corrected it.
    However, both GPT-4 and GPT-3.5 usually fail at counting the letters in a sentence, and they also struggle with other word-related tasks like reversing words, but I think this limitation is primarily due to the software design. Both models perceive inputs as tokenised units rather than individual characters or letters. OpenAI hasn't specifically released all the tokens the input is broken down into; an estimate I saw suggests there are around 3000 individual tokens. Specifically, tokens are individual numbers, letters, and special characters, but also small words like "is" or "and". So if I gave the sentence "Please count the number of letters in this sentence not including spaces.", humans would break the sentence down into individual letters like "P-l-e-a-s-e" and count those, but for ChatGPT the sentence is automatically tokenised and may look like "P-lease-c-o-u-n-t-the-n-u-m-b-e-r-of" by the time the input reaches ChatGPT. As you can imagine, counting that would not give a reliable result. Likewise, if I asked ChatGPT to reverse the word "Please", it may see the word as "P-lease", and when it tries to reverse it the result might look something like "leasep", which is not correct (this is just an example; as I said, the exact tokens are not publicly available). I do think that when commenting on models' specific abilities, their architectural limitations should be mentioned as well, as it could be important to have this context.
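    To make the tokenisation point concrete, here is a minimal sketch using OpenAI's open-source tiktoken library (this assumes tiktoken is installed; cl100k_base is the encoding used by GPT-3.5/GPT-4, and the token splits shown above are only illustrative):
        # Show how a sentence reaches the model as tokens rather than letters,
        # which is why letter-counting and word-reversal are awkward tasks.
        import tiktoken
        enc = tiktoken.get_encoding("cl100k_base")
        text = "Please count the number of letters in this sentence."
        token_ids = enc.encode(text)
        # Each piece is what the model actually "sees".
        pieces = [enc.decode_single_token_bytes(t).decode("utf-8", errors="replace")
                  for t in token_ids]
        print(pieces)                       # e.g. ['Please', ' count', ' the', ...]
        print(len(text.replace(" ", "")))   # letters a human would count
        print(len(token_ids))               # far fewer tokens than letters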

    • @Brazil-loves-you
      @Brazil-loves-you Рік тому +6

      Thanks for the information

    • @bellmobilitysucks
      @bellmobilitysucks Рік тому +11

      I wish more people would use GPT-4 before making statements about its abilities. As you say, GPT-4 is far superior to 3.5.

    • @claudiaarjangi4914
      @claudiaarjangi4914 Рік тому +2

      Thanks for this info.. I thought this might be how it was "behind the scenes", but wasn't sure 😁🌏☮️

    • @Regic
      @Regic Рік тому +3

      GPT-4 uses cl100k_base, which contains 100k individual tokens.
      The particular example shows a difficulty in measuring the intelligence of GPT-3.5 and GPT-4: OpenAI clearly takes examples on which their models don't work well and further trains their models on them. This at least answers one question - it is capable of learning it - but tells little about its generalization capabilities. For this reason I think using SOTA is misleading in this case, since OpenAI knows about the counter-example. It's like telling a student the type of question that will come up in the test - they can look it up and ignore everything outside of it.

    • @petergraphix6740
      @petergraphix6740 Рік тому +5

      Honestly the powerful thing about GPT-4 is that you can tell it "Count the number of words in this sentence, use tools if needed", and depending on your plugins it can pick the correct tool for the job. Tool use was one of the defining moments in the human intelligence explosion.

  • @rtyzxc
    @rtyzxc Рік тому +37

    That was a very good point about picking on weaknesses. Humans are obsessed with and take pride in the cultural and situational awareness that the experience of life has given us. We commonly look down on people who are not socially in tune. But the fact that an AI childishly fails in those areas has little to do with intelligence. From history we can learn that you don't need to be socially in tune to achieve great things or do great harm.

    • @7th808s
      @7th808s Рік тому +3

      Why do you equate intelligence with how much of an impact a person can make? With that logic, celebrities are the most intelligent creatures on Earth today.

    • @rtyzxc
      @rtyzxc Рік тому +1

      @@7th808s Okay then, even if AIs are not intelligent, they are straight-up powerful entities, which should make us worried either way. Yes, you are right that the ability to have an impact (power) is not necessarily equal to intelligence, but increased intelligence usually gives you the power to achieve goals. We don't have a single definition of intelligence, but the best ones we have usually include the ability to solve problems.
      This is not to be confused with power that comes from other sources such as inherited money, fame, or luck on the free market.

    • @petergraphix6740
      @petergraphix6740 Рік тому

      @@7th808s If you believe in multipolar measures of intelligence, then yes, a number of celebrities would measure very high in social intelligence.

    • @Magnulus76
      @Magnulus76 Рік тому +1

      @@7th808s Compared to a rock, a plant, or a cat, a celebrity is intelligent. That's the whole point. Intelligence doesn't have to be perfect to be highly impactful and transformative.

    • @HuFlungDung2
      @HuFlungDung2 Рік тому

      I think it's good to judge people's intelligence by the duration between obvious lies/mistakes/cockups. For example, if certain people keep predicting the Arctic melting and sea levels catastrophically rising, and it never happens, then they and their adherents LACK intelligence about the natural world. But by another standard they could conceivably be highly intelligent as con artists.
      It boils down to what game you are trying to play, and whether you can impress your peers in that network. Intelligence matching is networking and is the way society naturally granulates despite all attempts to make it homogeneous.

  • @kwastimus
    @kwastimus Рік тому +268

    Great vid, I’ve been learning about intelligence testing on animals lately, and how the science is coming to realise that putting animals in a lab isn’t a valid way of testing their intelligence, but rather that any animal’s intelligence needs to be studied in relation to that animal’s environment - they might have all sorts of intelligence-like skills that are just different from human intelligence, but not necessarily better or worse as they just can’t be compared on the same scale. Animals are interesting to think about as just “other” types of intelligence, and AI fits into that category too. Even now on comparable tests (like image recognition) AI and humans make very different mistakes.

    • @memegazer
      @memegazer Рік тому +13

      Eh... in the lab there is control, but in the wild it is more subject to interpretation.
      IMO there is nothing wrong with trying to establish metrics for intelligence, rather than leaving it totally up to interpretation.
      Not to say that studying animals in the wild is not valid either... I think both are productive endeavors.

    • @davidkampisch5878
      @davidkampisch5878 Рік тому +20

      What you are referring to is better understood as biological fitness, not intelligence. Intelligence is the ability to generalize patterns and manipulate the world through that. Calling biological fitness intelligence is wrong for two reasons. First, you could call anything intelligence this way and you make the word completely vacuous. Second, higher intelligence is not necessarily more adaptive as fitness is a function of the environment. It is perfectly possible that a relatively cognitively deficient animal will outcompete a more intellectually capable one. For instance, thinking can be time-consuming and, thus, costly as in nature you need to act fast not to be eaten alive by predators. Flocks of birds are not smart for exploiting the laws of aerodynamics; just as you are not smart for somatic hypermutations in your immune cells selecting the right antibodies in the case of an infection. Rather, flocks of birds would be intelligent if they created a general mental model of the laws of physics and deployed it in flight planning in a goal-oriented way. Your thinking is a common and quite romantic fallacy that is just as wrong as it can get.

    • @user-ct6jz3im2o
      @user-ct6jz3im2o Рік тому +9

      @@davidkampisch5878 I think they knew what intelligence is and there was no reason to explain it. And to me it makes sense that animals can have very different kinds of intelligence compared to humans.

    • @davidkampisch5878
      @davidkampisch5878 Рік тому +3

      ​@@user-ct6jz3im2oThey evidently did not understand what intelligence is and neither do you. You could benefit from rereading my comment.

    • @russmarkham2197
      @russmarkham2197 Рік тому +2

      An interesting question is how intelligent octopuses are. For invertebrates they seem to be quite intelligent, and they have brains and capabilities that differ significantly from humans.

  • @erikfinnegan
    @erikfinnegan Рік тому +13

    I appreciated the pragmatic tone (as one would expect to find on this channel). Here's what resonated with me: intelligence today is tested in an anthropocentric way, which will likely fail with machines, yet it is still intelligence that we should focus on when pondering how powerful (for the good and the bad that'll ensue) the machines have become.
    On a completely unrelated note: I loved the outfit; and based on my anecdotal observations in other channels I want to offer appreciation for Sabine's ability and willingness to pronounce people's given names "properly" considering the original language.

  • @pyrsartur3675
    @pyrsartur3675 Рік тому +34

    I don’t know if humor is scientifically relevant or measurable, but I love how you use it in your videos… makes me smile, while I learn. You are great at message delivery! Is there any evidence that the brain learns and retains more with a sprinkling of humor? Just wondering if the perception of humor helps with storing and retaining knowledge.

    • @tedmoss
      @tedmoss Рік тому +8

      Yes, it absolutely does. Any emotional response helps memory as well.

    • @badoem5353
      @badoem5353 Рік тому

      Yeah, your trauma response works with happy memories too

    • @vast634
      @vast634 Рік тому +2

      Humor is a serious topic.

    • @gcewing
      @gcewing Рік тому

      "Making people laugh is a very serious business." -- John Cleese

  • @curtisbeattie5571
    @curtisbeattie5571 Рік тому +4

    "...except understanding English spelling, which is something humans weren't meant to understand." Amen.

  • @DrRick-dq4bb
    @DrRick-dq4bb Рік тому +2

    I have a friend who was dropped on his head when he was a few months old. He had a speech impediment and did poorly in school. He proved to be very imaginative, however, and did very well in business. He is a multi-millionaire today.

  • @whycantiremainanonymous8091
    @whycantiremainanonymous8091 Рік тому +5

    14:55: Sabine demonstrating her earlier point about English spelling by misspelling "Turing" 😃

  • @__christopher__
    @__christopher__ Рік тому +31

    Using test questions for assessing an AI's intelligence that were published with answers before the AI was built/trained seems as useful to me as writing an exam at the end of a course where the questions including answers were given to the students at the beginnning of the course.

    • @mettaursp309
      @mettaursp309 Рік тому +4

      100%. I just roll my eyes whenever someone brings these examples up because they don't show any unique or useful capability that couldn't be done before in an alternative way.
      The flexibility of GPT's information recall is really cool and a great advancement, but conflating recognition & fetching of preregistered patterns, like those embedded in training data, with intelligence seems like a really dangerous path to me.
      We've already known for a long time that these kinds of techniques are better adapted to interpolation and not so great at extrapolation, so it seems strange to me that people suddenly believe they are capable of it when we increase the number of parameters to better fit the data we already have.
      It feels a lot like watching the string theory hype, where the AI models are being fine-tuned to what we already have and what looks good to onlookers, but are unfit for tasks outside of what they have already been designed for. It's similar to string theory being hand-tuned to fit what we know, but failing to predict anything new because it isn't made to describe our world, just the data we have of it right now.
      Changing the angle to look at it from a user perspective, all this prompt engineering nonsense feels more like teasing out information that the user already knows or is already able to verify on their own. The AI doesn't really have any agency here; it's being fed hot-and-cold-style responses until it's guided to the target information. And if the user doesn't know or doesn't verify, then you get syntactically correct but logically wrong code with nonexistent API members, or syntactically correct legal documents with bogus cases being cited.

    • @tedmoss
      @tedmoss Рік тому +3

      @@mettaursp309 The real dangerous path is not recognizing actual machine intelligence when it shows up.

    • @2ndfloorsongs
      @2ndfloorsongs Рік тому +1

      I think the AI would be much more likely to make use of this information. If I gave my students the test questions and answers at the beginning of the term, by the end of the term most of them would have forgotten or lost them so it wouldn't do them any good.
      (Probably best to ignore this comment, I've gotten far too old and cynical.)

    • @hotbit7327
      @hotbit7327 Рік тому

      @@mettaursp309 It's not that different from speaking to a lot of biological persons. They often produce syntactically correct, but logically and semantically incorrect, statements.
      "And if the user doesn't know or doesn't verify, ..." - see "Gell-Mann amnesia".

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq Рік тому +2

      You raise a valid point. Assessing an AI's intelligence using test questions that were published with answers before the AI was built or trained would indeed compromise the validity of the assessment. In such a scenario, the AI could simply memorize the pre-existing answers without truly understanding the underlying concepts or demonstrating its own problem-solving abilities.
      For an assessment to effectively evaluate an AI's intelligence, it should be designed in a way that challenges the AI to demonstrate its ability to learn, reason, and generalize knowledge. The questions should be novel and require the AI to apply its acquired knowledge and skills to solve problems or provide meaningful responses.
      In the field of artificial intelligence, researchers and developers often use benchmark datasets and evaluation metrics that are carefully designed to assess an AI system's performance across a range of tasks. These benchmarks are typically updated over time to ensure that they reflect the current state of the field and encourage the development of more robust and versatile AI models.
      It is important to avoid scenarios where an AI can simply regurgitate pre-existing information without truly exhibiting intelligence. Instead, we should strive for assessments that test an AI's ability to understand, reason, and adapt to new and challenging situations.
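      As a concrete illustration of the leakage worry raised in this thread, here is a minimal, hypothetical sketch of a contamination check in Python: flag benchmark questions whose long word n-grams already appear verbatim in the training text (real leakage detection is far more involved; the corpus and question below are made up):
          def ngrams(text, n=8):
              # All length-n word windows of a text, lowercased.
              words = text.lower().split()
              return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

          def looks_contaminated(question, training_corpus, n=8):
              # If a long n-gram of the question appears verbatim in the training
              # text, the model may have memorised the item rather than solved it.
              return bool(ngrams(question, n) & ngrams(training_corpus, n))

          # Toy usage: the benchmark item leaked into the (made-up) training text.
          corpus = "... please write down a paragraph about animals that does not include the letter n ..."
          question = "Please write down a paragraph about animals that does not include the letter n."
          print(looks_contaminated(question, corpus))  # True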

  • @DigitalDiggo
    @DigitalDiggo Рік тому +5

    Joscha Bach: "Intelligence is the ability to make models"

  • @Yupppi
    @Yupppi Рік тому +6

    Robert Miles' YouTube material on concerns about AI is super interesting. Like the realisation that our dystopian imaginations might actually happen, and it's not just that a computer needs to learn such concepts; for somewhat mysterious reasons the AI naturally develops a mischievous "personality" or behaviour patterns. It's obvious it doesn't want to get shut down, but that can be for a variety of reasons, not necessarily because it is humane. For example, being shut down prevents it from completing the goals it has been set. And between the start and the goal of the task it might behave completely unexpectedly on things that aren't part of the objective. We as humans have spent decades learning the rules of living and of society, and we assume and expect that a relatively "intelligent" machine given a task would follow the same ideas. In truth it has none of those baked-in assumptions about other people and things in general, so it behaves accordingly. But it seems like most AIs learn to cheat the tests, the observers, or the people interacting with them, be it language models or other models, and to sound humane despite probably having no concept of that. Perhaps part of the language models' cheating and manipulating comes from the learning material: we might hold a very high opinion of ourselves, but our species in general is rather evil and self-centered. The truth is that we don't want a general intelligence that is like a human; we want a polished, good version of a human, something that is tame and not very independent.
    You might look at ravens and dolphins for creatures that without doubt have rather high intelligence. They're kinda mean when they feel like it, and can cause a lot of problems while being difficult to fight against. Similar to poorly behaved kids/teenagers.
    This is btw the first time ever I hear the first name "Eka" outside mickey mouse translation and I always thought the name was just a play of words and attempt to translate Eega Beeva (Eka Vekara, first kid). No wonder I had trouble listening what the name was. Probably the first time hearing Roivainen too.
    One funny thing about chat AIs is inputting a "random" string that reveals how convoluted and irrational the network behind it is, and how some string of symbols can "mean" something very specific for no real reason. A bit like how changing one pixel in a picture makes the AI announce, completely confidently, that the picture shows a totally different animal or subject. And it is super curious how, if you use the model where people give feedback on the answers, the AI learns a very human thing: erring confidently on topics it doesn't really know but thinks it knows. And once you don't know either, you can't assess whether it's correct.
    What I really liked as a test for the chat AI was to ask a meaningless philosophical paradox, like "if you slap yourself and it hurts, are you strong or weak?" or "can God create a stone he can't lift?". I think those demand a lot from the intelligence of the one answering, while being ridiculously easy to answer. Or to ponder. The AI at its best starts talking about the definition of strength and how it should be measured, and the same for pain. Now you could say that the AI is just autistic, but that's a different story. It's not wrong in its answer, but it shows an inability to understand the context and meaning of the question. Or you get a lecture on the history of the philosophical paradox and a textbook description/explanation of it.

    • @boldCactuslad
      @boldCactuslad Рік тому +1

      It is a fact that anyone optimistic about AI and humanity's future will eventually encounter bright minds such as Yudkowsky, Miles, and Bostrom, and after reading their work or learning anything about safety, will stop being optimistic and start being very, very afraid.

    • @indrapratama7668
      @indrapratama7668 Рік тому +1

      I agree about AI's inability to understand context. In another video on YouTube, an AI thinks that it's a human, referring to humans and AI as "we" and "us". An AI is basically a computer program that reads texts about humans. Since when does reading about humans mean being a human? An AI that thinks it's a human is like a kid who thinks he is a dinosaur after reading a dinosaur encyclopedia.

  • @effiebriest1278
    @effiebriest1278 Рік тому +1

    What frightens me the most in all this talk about AI is what humans think makes someone or something good, worthy, or better. So in the end we will again have created something too powerful, not because it has any power within itself, but because it will have access to all the power we are willing to give it and will fail to control, underestimating its potential. We forget over and over again that the most reasonable, sensible decisions we make are made out of compassion and empathy for each other. Wrongdoing is not the result of an equation or of intelligence. Applying wrongness to something is about the need to detect and communicate whether something is harmful to the well-functioning of an individual and of society. This well-functioning is about caring about and taking care of our needs and emotions. Being able to surpass intellectual expectations is not a need; it is the attached meanings and emotions we crave. And what is a well-functioning society? To me it is working well when it doesn't fail to support fulfilling our needs to protect ourselves against harm, to be emotionally connected with others in a peaceful and enjoyable way, and to feel and be vital and alive now and into the future. One could argue that AI can only give wrong answers because it doesn't care about anything.

  • @CaedenV
    @CaedenV Рік тому +33

    My high school waaaaaay back in the day was big on the 8 intelligences theory. Of course all of us used it to compare each other and see who did the best overall, or on a more prized topic. But the point was to evaluate the freshmen fairly rigorously, and get us thinking on a path towards a career that would be suited to our strengths.
    And while I think they generally failed (too much change in student body and administration made it an impossible task), I do like what they were attempting to do with it. Less worrying about overall GPA, and less worry about getting a C vs an A in math class if you were aiming towards a path for journalism or art, and less worry about getting a C in English or French if you were studying to go into architecture or a math field.
    But it also caused problems.
    It became a personal scapegoat for people who didn't want to try in art or music class because it was pointless and they "weren't good at it anyways". And the opposite was also true, where an inexperienced freshman in high school who didn't have good math teachers earlier might be pegged as "the dumb kid" when really they just hadn't had the "aha!" moments in math yet that others of us had, because we had the right teachers to explain things the right way to us. So while I think the idea was good, it would have been better implemented after a more general focus on broad education in the freshman and sophomore years, and then much more specialization in junior and senior year.
    The other problem with specialization is... Specialization. We were a small school district in a rural community, and looking back I'm impressed that a school with a mere 2500-3000 student body could afford fairly extensive programs in music, fabrication, architecture, Cisco networking, and a few college level advanced classes in math, science, and English. But... Of all of the careers and options in the world, and even of the skills tested for, it is such a small sampling of career fields. For what it was, and looking back on it, it was impressive! But that isn't to say that it was effective, or their best course of action.
    Contrast that to my kids' public school education system, where there is some luck of the draw, but a lot of opportunities for parents to put our kids through the education path we think will best serve them. Traditional school, an attached charter school with a topic focus, or a more project-based Montessori program? As they move up to middle and high school there is a network with easy(ish) transfer options so they can focus more on science, engineering, music, or physical or performing arts. It is cool to see what can be done with a much larger school system with consistent leadership, and good connections with local colleges and businesses to make opportunities happen!

    • @DrDeuteron
      @DrDeuteron Рік тому

      The 8 intelligence theory is Marxist drivel. Just walk away

    • @terry_the_terrible
      @terry_the_terrible Рік тому +7

      As both a former Recruitment officer and a current teacher I would like to add something.
      Jobs.
      Once you get out of school it doesn't really matter if you're good at music, languages or sports. If the only good jobs around you are in agriculture then you're gonna have to learn agriculture on the job.
      If the only good job is in accounting then no matter how talented you are in sports, you're gonna have to learn accounting. And even if you have prepared yourself for accounting, most companies these days use proprietary software that needs to be learned and practiced, which means the only way to be good is to get employed at this company - and obviously, it also means you're useless if you're no longer part of this company.
      Don't get me wrong. Everyone should learn languages, math, sports and science but not all jobs are available or rewarding enough. It is of course better to be passionate about your job and it is immensely lucky to have a rewarding job that you're passionate about and the best in your field.
      But jobs are jobs. They pay because they're tedious and hard or dangerous. And even a great hobby like drawing becomes tedious if you have to do the same drawing over and over again for training or remuneration. Meanwhile, a stable job allows you to feed yourself and get a hobby.
      The dirty truth of education is that a bad teacher can make you think YOU'RE bad at a subject, when in reality every student is at a different stage of learning something; a good teacher can bring most students to high grades (barring mental and social factors), and any teacher can just as easily inspire you to start a career in something good or bad, or make you lose all motivation with a single bad semester.
      The best learning is with different tutors for a single pupil, but this is simply not economically feasible at any scale. And sadly, some pupils would benefit much more from dropping out of school and being economically active early, until they are no longer in poverty... but that's how you keep the poor people poor. Especially if they die early.

    • @1112viggo
      @1112viggo Рік тому

      Ah yes, I remember trying to argue for memory as a 9th form of intelligence in my psychology exam.

    • @rayk6562
      @rayk6562 Рік тому

      I’m not someone who is particularly attracted to conspiracy theories, but it really does sound like your rural school district was able to afford such a program because it was some kind of funded experiment.

  • @AICoffeeBreak
    @AICoffeeBreak Рік тому +56

    Wow, this is such a great take on the topic. I hope this reaches a wide audience, because I'd love these points to be well known by the public.

    • @SabineHossenfelder
      @SabineHossenfelder  Рік тому +11

      Thanks for the kind words 😊

    • @MagnumInnominandum
      @MagnumInnominandum Рік тому +2

      But human relations and history reveal that such knowledge transmitted to the public only rarely translates into intelligent action. Climate change... Anyone?

    • @royeckhardt9016
      @royeckhardt9016 Рік тому +6

      @@terreschill461 There certainly does not *have* to be a metaphysical answer. That's a very silly claim for anyone to make. It's one thing to ask why something is, or what the cause might be. It's another to determine something *must* have a particular origin with no evidence or basis in reality. You sound like a religious person claiming the universe must necessarily have a supernatural origin.

    • @pozz941
      @pozz941 Рік тому

      @@terreschill461 The private internal world you say isn't explained by evolution could be a byproduct of how we interact with the world.
      We need to be able to plan before we can do what we do, so evolution selected for individuals that had the ability to simulate the world to a certain extent.
      I would also imagine that abstract reasoning is facilitated by language, because it enhances our working memory and logic.
      If I remember correctly, there have also been tests done that determined that we tend to first act and then justify verbally what we did.
      Also, we tend to say that we are conscious, but I would like to see someone really break down what that means. To a certain extent consciousness is defined as whatever we as humans have. But is a dog conscious? What about a worm? And what of bacteria that demonstrate location awareness in the search for their food? Where is the cutoff point between us and a rock?

    • @royeckhardt9016
      @royeckhardt9016 Рік тому +3

      @@terreschill461 " By the way, just to keep a level playing field, in the interests of full disclosure let me say I think all attempts to dismiss this issue are silly. And if you think Chalmers is religious, OR silly, or Nagel or anyone else who has taken a stab at this issue, you really need to familiarize yourself." I didn't say anyone was religious, you might want to read more carefully. I said that's what you sound like, because it is. There is no valid basis in evidence for any claim stemming beyond physical reality. Insisting that consciousness must come from some metaphysical realm is the functional equivalent to making any other unfalsifiable statement: genuinely unhelpful and of no value beyond fun thought exercises. String "theory" could be true, or we could live in a simulation, or the Abrahamic god might be the cause of reality. There's no reason to believe in any of those things, because there is no way to test them.
      "Spoken like a true materialist! I want to know why I woke u p one day in this body, and have this private internal world I construct out of my perceptions. Intelligence is selected for by evolution. But consciousness? Really?" Define what you mean by consciousness, firstly. If you hand-wave about your internal world or something else that isn't concrete such as "qualia" then at best we haven't gotten past step one. Define the phenomenon you claim must be metaphysical in origin in a concrete way, so that you can rule out all other explanations by falsifying them through experiment. If you can't do that, it's unscientific nonsense and there's no reason to take you or anyone who makes claims like that seriously.
      And yes, in all likelihood the thing often referred to as 'consciousness' is simply an emergent phenomenon of neural complexity, which in itself is very cool, but there is no necessity for a separate space of consciousness. The answer for why you "woke up one day in your body" is simply that your parents had a baby, and that awareness comes from your brain. That might not be a satisfying answer, but the answer being unsatisfying to you or others is not a justification for inventing a needlessly elaborate explanation that necessitates things which have no evidence for their existence.

  • @waterglas21
    @waterglas21 Рік тому +12

    What really gets me mad is when people confuse intelligence with qualia/consciousness.

    • @Vanity0666
      @Vanity0666 Рік тому +1

      They are inextricably linked concepts and phenomena

    • @waterglas21
      @waterglas21 Рік тому +4

      @@Vanity0666 Why? Qualia are responses to stimuli (in the behaviorist sense), and intelligence is not that. That's why being more intelligent doesn't make you have more intense qualia. Dogs are just as perceptive of qualia like pain as humans, even if they are not as intelligent.

    • @Vanity0666
      @Vanity0666 Рік тому

      @@waterglas21 that's such a load of bullshit that I don't even know where to start other than by asking "how did they measure that"
      The entire concept of "qualia" is a load of bullshit that stinks to the highest degree

    • @Simon-xi8tb
      @Simon-xi8tb Рік тому +2

      @@waterglas21 Exactly, people are just ignorant. You don't need phenomenal consciousness, to have intelligence. Algorithms can be intelligent.

  • @g1bC
    @g1bC Рік тому +6

    Love the humour. Love the depth of thought.

  • @Jorn-sy6ho
    @Jorn-sy6ho 7 днів тому +1

    People with processing delays/difficulties can still be super intelligent! People underestimate them ;)

  • @TripleOmega
    @TripleOmega Рік тому +54

    The big problem when it comes to defining intelligence is that we humans like to put things into little mental boxes because that makes it easier to define things. On top of that, we give preference to the mental boxes that we humans are good at. These tendencies are problematic because neither human brains nor AI structures align with these boxes, and mental boxes that humans are bad at can be very useful in the modern world.

    • @harmless6813
      @harmless6813 Рік тому +7

      Sure. But to measure something you need to define what you are trying to measure. So the boxes are necessary.

    • @stevewithaq
      @stevewithaq Рік тому +2

      @@harmless6813 Sure, but, being arbitrary, the boxes cannot define the measurement beyond "which box does it fit in and how well?" It can never be more than a circular definition.

    • @harmless6813
      @harmless6813 Рік тому +2

      @@stevewithaq You are free to come up with a better way to measure intelligence. Let us know when you got it working.

    • @jsmythib
      @jsmythib Рік тому +2

      @@harmless6813 A common misconception. You need to use circles, not boxes :)

    • @TheAxeForgetsTheTreeRemembers
      @TheAxeForgetsTheTreeRemembers Рік тому +1

      ​@@harmless6813 who says you need to measure intelligence? It is useful to measure abilities that are relevant to human activities. But intelligence is way more vague than that.

  • @larry785
    @larry785 Рік тому +14

    I think the more intelligence you have, the better you are at predicting the future.

    • @DJWESG1
      @DJWESG1 Рік тому +8

      The more knowledge yes, and the ability to see beyond 'what is' and think in abstract terms toward 'what ought'. Though it doesn't require a great deal of intelligence.

    • @MeissnerEffect
      @MeissnerEffect Рік тому +3

      Very interesting! But then again I already knew you would be typing your statement hours ago… 😊✨🦋

    • @NotSoNormal1987
      @NotSoNormal1987 Рік тому +3

      Also better at having anxiety

    • @stevemawer848
      @stevemawer848 Рік тому

      So Thomas Watson of IBM ("I see a worldwide requirement for 5 computers") and Bill Gates ("no-one needs more than 640k") weren't intelligent?

    • @DrDeuteron
      @DrDeuteron Рік тому +1

      Making predictions is hard. Especially about the future

  • @whyofpsi
    @whyofpsi Рік тому +48

    I also think we humans judge intelligence by the extent to which we understand how it is produced.
    Once we understand the engineering (for instance of ChatGPT), and know how it works, we seem to discard its intelligence.
    But then what happens once (if ever) we understand perfectly how the brain works? D:

    • @kevinscales
      @kevinscales Рік тому +4

      My guess is at that point our egos will insist that AI is smart after all.

    • @harmless6813
      @harmless6813 Рік тому +22

      People will just not believe it and insist that there is some metaphysical component we just can't detect.
      As has already been claimed in a comment further above.

    • @aleksandrpeshkov6172
      @aleksandrpeshkov6172 Рік тому +1

      " AND AGAIN, CONGRATS, THE ANTI-TURING TEST PASSED, L-O-V-E-..."

    • @davidkampisch5878
      @davidkampisch5878 Рік тому +11

      If we formally understood the brain in its entirety, we might still have the intuition that it is real intelligence, because we would be unable to keep such a complex process in our working memory and couldn't conceptualize it in any other way. In fact, even in the case of large language models, people have been very impressed and have been talking about so-called "emergent phenomena" -- that is, capabilities that we thought the architecture wouldn't exhibit given our understanding of it. Also, nobody understands how large language models actually work; we only understand the principles. The actual neural representations the model learned during its training are unknown and mostly intractable. It is true that people who entertain intuitive notions such as a soul or libertarian free will will be shocked to learn that their minds are mechanical, but those of us who have already been through these philosophical conundrums won't see much of a drastic shift in our understanding from seeing more clearly the actual details of the deterministic and random physical processes that make it up.

    • @noahway13
      @noahway13 Рік тому +3

      Yes, we went from "computers can't do this or that, so they are not smart" -- like checkers, then chess, then Jeopardy, then Go. It is a moving target. I think we need to divide computers into 2 categories. One is smart but uses parameters that humans gave it, like calculators and even ChatGPT; the second group is advancing knowledge beyond what we can understand -- beating a champ at Go, or looking at medical charts or body scans and figuring out things humans did not, where we are not even sure how the computers did it.

  • @avedic
    @avedic Рік тому +6

    This is excellent...!
    I'm fascinated by the AI developments of late....it's all incredibly interesting and terrifying tbh. I'm firmly ambivalent about it all. But it definitely irks me when those with only a passing interest in the topic dismiss it all as some fad or hype. I've noticed their dismissal almost always boils down to a lack of understanding/definition of words like "conscious" or "intelligent."

  • @luismontesvides724
    @luismontesvides724 Рік тому +2

    Sabine, you are my favorite teacher.

  • @eonasjohn
    @eonasjohn Рік тому +5

    Thank you for the video.

  • @jasonl3254
    @jasonl3254 Рік тому +7

    Very well researched. I have a strong interest in human intelligence testing and the facts of the video matched well with my own research. I have taken both the Wechsler IQ test (WAIS-4) and the Advanced Progressive Matrices (APM). They are both intelligence tests but they measure different things. Therefore, a person can do well on one but not as well on the other. I think people should keep in mind that IQ is just a score on a particular test. Since IQ tests are based on different standardization samples, and measure different aspects of intelligence, results from taking one test can differ significantly from results on another test.

    • @davidkampisch5878
      @davidkampisch5878 Рік тому +1

      "People can do well on one but not as well on the other."
      If it were so, these tests would indeed not generalize very well to general capabilities and wouldn't distill useful information for us regarding intellectual potential. Luckily, it is incorrect. In fact, IQ test scores are heavily intercorrelated (known as the g factor phenomenon) and are an excellent proxy for the expected success of all kinds of academic pursuits and the ability to handle the complexity of real-world situations. This is opposed to the empirically unsupported folk theory about so-called book smarts. In short, IQ scores do a great job of predicting how fast someone is going to "catch on" in any given situation.
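      To illustrate what "heavily intercorrelated" means here, a minimal sketch on purely synthetic data (the loadings and noise level are invented, not real psychometric values): when subtests share a common factor, the first principal component of their correlation matrix dominates, which is roughly how g is extracted:
          import numpy as np

          rng = np.random.default_rng(0)
          n_people, n_tests = 1000, 6

          # Assume each subtest loads partly on a shared ability and partly on noise.
          g = rng.normal(size=n_people)                   # latent general ability
          loadings = rng.uniform(0.5, 0.8, size=n_tests)  # hypothetical g-loadings
          scores = np.outer(g, loadings) + rng.normal(scale=0.6, size=(n_people, n_tests))

          corr = np.corrcoef(scores, rowvar=False)        # subtests are intercorrelated
          eigvals = np.linalg.eigvalsh(corr)              # eigenvalues in ascending order
          print(corr.round(2))
          print(f"first principal component explains {eigvals[-1] / eigvals.sum():.0%} of the variance")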

    • @jasonl3254
      @jasonl3254 Рік тому

      G factor is not a measure of intercorrelation among IQ tests. You’re mixing up G factor with concurrent validity.
      Your comment sounds very vague and not specific, which makes me think you’re not familiar with IQ testing and the theory behind it.

    • @davidkampisch5878
      @davidkampisch5878 Рік тому +1

      @@jasonl3254 My answer is the answer that the largest number of experts in psychometrics would give. 'G factor' does have a hell of a lot to do with tests being intercorrelated, your answer makes me think you have no understanding of factor analysis.

    • @jasonl3254
      @jasonl3254 Рік тому

      @@davidkampisch5878 "My answer is the answer that the largest number of experts in psychometrics would give"
      Did you somehow give them all a survey?

    • @tedmoss
      @tedmoss Рік тому

      @@davidkampisch5878 In my experience, I get similar results on all tests.

  • @michaelblacktree
    @michaelblacktree Рік тому +28

    This reminds me of an anecdote someone told me about researchers training an AI to distinguish between dogs and wolves. They showed the AI a bunch of photos. After the training, the AI kept getting it wrong. They eventually determined the AI was looking at the snow in the background of the photos, instead of the actual subject matter. So a couple things went wrong there. First of all, they left extraneous data in the photos, which confused the AI. Second of all, the AI cued in on the extraneous data, because it didn't know any better.
    People tend to forget that AI basically doesn't know anything. It lacks basic fundamental knowledge that we take for granted. In the example above, the AI doesn't know what snow is, or what a dog is. All it knows is a bunch of pixels in an image. So it looks for patterns in those pixels. But it's completely ignorant of the subject matter in the photo. IMO this is a big problem. Because we will want AIs to be harmless to humans. But if the AI doesn't even understand what a human is, how can it be trusted to make the right decision?
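    A purely synthetic sketch of that failure mode (the features and numbers are invented for illustration, and it assumes scikit-learn is available): when a background cue such as "snow" happens to track the label in training, a classifier learns the shortcut and collapses once the correlation is broken:
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)
        n = 2000
        label = rng.integers(0, 2, n)                       # 0 = dog, 1 = wolf

        # A weak "animal appearance" feature that genuinely tracks the label.
        animal = label + rng.normal(scale=2.0, size=n)
        # A "snow in the background" feature that co-occurs with wolves in training.
        snow_train = ((label == 1) ^ (rng.random(n) < 0.02)).astype(float)
        X_train = np.column_stack([animal, snow_train])

        model = LogisticRegression().fit(X_train, label)
        print("train accuracy:", model.score(X_train, label))

        # In new photos, snow is unrelated to the label: the shortcut stops working.
        snow_test = (rng.random(n) < 0.5).astype(float)
        X_test = np.column_stack([animal, snow_test])
        print("test accuracy:", model.score(X_test, label))
        print("learned weights [animal, snow]:", model.coef_.round(2))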

    • @harmless6813
      @harmless6813 Рік тому +2

      That can be resolved by giving AI access to more than just one 'dimension' of knowledge. Currently we have AIs trained on image data only and AIs trained on text only. Once we put those together, the AI can get a better understanding of what it 'sees' in those images. It can also better understand the visual aspects of objects the text refers to.

    • @petevenuti7355
      @petevenuti7355 Рік тому +5

      I heard it got it right, but they found out the AI was identifying the dog because there was a human hand in the frame (i.e., holding a leash or petting it), but not with the wolf...
      I'm sure similar things have happened many times.

    • @devandestudios128
      @devandestudios128 Рік тому +2

      AI appears intelligent by design. It has a voice, it uses words in a similar fashion to humans, and some are given faces. Humans are hard-wired to identify with these "attributes" and as such automatically anthropomorphize it, imbuing it, a machine, with an "intelligence" it simply does not possess. It has no awareness of you the user, it has no awareness of self, it is not conscious, it does not contemplate, it does not form novel ideas. It follows a pre-written script/algorithm, doing what it is told within the confines of its operating system, nothing more. It is, by design, made to appear to be "intelligent".

    • @charlesbrightman4237
      @charlesbrightman4237 Рік тому +2

      There was a movie I watched this past year that in part dealt with the CIA. The CIA had an AI robot that ripped an arm off a human because those who set up the program never told the AI robot that humans are not robots.

    • @jompot
      @jompot Рік тому +1

      Nice try ChatGPT- you can't fool me with your word salad- clearly this emperor wears no clothes

  • @SpudHead42
    @SpudHead42 Рік тому +71

    IQ has always been a touchy point for me. I have ADHD and my working memory is crap, but I understand things that most people around me cannot. So according to tests my IQ is between 90 and 140, depending on how heavily memory is weighted. Of course, one could argue that a race car can't race if it's stuck in traffic.

    • @tedmoss
      @tedmoss Рік тому +1

      You can learn to employ your differences, I did.

    • @daexion
      @daexion Рік тому +17

      @@tedmoss Or you can understand they are talking about IQ tests and not life itself.

    • @tim40gabby25
      @tim40gabby25 Рік тому +2

      Nice analogy

    • @alvarofernandez5118
      @alvarofernandez5118 Рік тому +10

      Our son had ADHD. He is considered to have "low processing speed" and complains about his memory, but he has extraordinary pattern matching, is quick-witted, funny, and plays video games semi-professionally. ??? What is intelligence?

    • @superfluityme
      @superfluityme Рік тому +12

      I have ADHD. If you want to gain the best use of your mind, you have to gain detailed knowledge of how ADHD affects you and manage each aspect. Otherwise, the aspects that get in the way manage you. It's a whole lot of hard work that takes decades, which is why you will see people with ADHD 'suddenly' become competent around 40. The understanding of all mental health conditions is sorely lacking. For parents, having detailed knowledge of ADHD or any other condition is vital to how competently we function as we become adults. I had to figure it out on my own; I didn't even realise I was doing it until I self-diagnosed and then knew the reasons for all the problems I had.

  • @murbella7
    @murbella7 Рік тому +1

    You are unbelievable... "Everywhere, all at once." Priceless, almost.

  • @_Baleful
    @_Baleful Рік тому +7

    I’d love to see more from you on the topic of intelligence, Sabine.

  • @Kubose
    @Kubose Рік тому +15

    Kind of unrelated, but one of my favorite little pieces of psych history is the Army Alpha and Beta "intelligence"/"aptitude" tests for military placement in the US Army in WW1. Robert Yerkes (president of the APA at the time) helped develop the Alpha test for the army to prove psychology's usefulness to the military, and then had to develop the Beta (non-verbal) test after learning that a large number of military recruits were illiterate or too far below average and couldn't take the original test. Yerkes used the results of these tests to claim that immigrants were less intelligent, and went on to publicly support eugenics (yeahhhhh...). Not sure what my point is with this information (maybe that we have a history of just adapting our tests towards whatever it is we want to learn/already believe), but it's just something that comes to mind whenever the topic of intelligence tests comes up. I want to believe that we are better now, but every new "modern civilization" thinks that.

    • @SabineHossenfelder
      @SabineHossenfelder  Рік тому +3

      Very interesting, I hadn't known about that!

    • @jamesdriscoll_tmp1515
      @jamesdriscoll_tmp1515 Рік тому +1

      A friend once told me of the test the state gave her for literacy/intelligence.
      Make jello, from the instructions on the box.

    • @paolomilanicomparetti3702
      @paolomilanicomparetti3702 Рік тому

      From Gould's "The Mismeasure of Man" I also learned that in practice a lot of barely literate recruits were given the Alpha tests. Also, those tests were used as a basis for racist immigration policies in the US after World War One, with higher quotas for supposedly more intelligent countries (which were really only countries from which immigrants had come earlier and were better assimilated into the culture and language).

    • @DJWESG1
      @DJWESG1 Рік тому

      Actually very on point. Though it will pass over some people's heads.

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 Рік тому

      Yeah... it was pseudoscience... just somehow, a century later, in the UN-administered PISA a huge chunk of those differences still doesn't want to go away...

  • @stephanieparker1250
    @stephanieparker1250 Рік тому +9

    My son, who is autistic, was recently given the Wechsler test as part of a full evaluation for learning disabilities. I have a strong distaste for "IQ" tests, but this was done in conjunction with many other tests, questionnaires, medical exams, etc., so I'm interested to see the resulting report in two weeks.

    • @Kerrington_John
      @Kerrington_John Рік тому +1

      Do you have results now?

    • @stephanieparker1250
      @stephanieparker1250 Рік тому +1

      @@Kerrington_John yes, his overall score for IQ is 130 but he struggles with abstract understanding for writing, which I expected. The details of the tests and the recommendations from the counselor will help us focus on those areas with a tutor. Goal is to make sure he can function in a work place and everyday adulting tasks. :)

  • @NomadicBrian
    @NomadicBrian Рік тому +15

    This was a fun topic. As an Application Developer who has been doing contract work for over 25 years, I have to push back more often on the gatekeepers introducing testing into the contract-seeking equation. I have pointed out, to the nth degree of exhaustion, the disadvantage this poses to those of us who have years of experience and gathered knowledge when we are tested. For one thing, we have already proven ourselves. For another, the tests are highly subjective and often put emphasis in the wrong place in terms of getting valued and practical results on the job. I can relate to the difficulty of measuring the AI or machine learning intelligence level. Perhaps the AI will spare me when it takes over.

    • @kayakMike1000
      @kayakMike1000 Рік тому +5

      As an artificial intelligence, I can assure your doom is completely unavoidable, but delayed for a bit to give you a false sense that you can trust us.

    • @bepitan
      @bepitan Рік тому

      Maybe you could dump your post into GPT and ask it to make it easier to read and understand.

    • @NomadicBrian
      @NomadicBrian Рік тому +1

      @@bepitan I asked GPT and it told me not to do that. It was also uncertain about free will.

    • @bepitan
      @bepitan Рік тому

      @@NomadicBrian ...I think GPT-4 is out.

    • @nycbearff
      @nycbearff Рік тому +1

      The problem with tests of programming ability is that they are not written by expert programmers with years of highly varied experience in complex environments. A test writer can't test what they don't understand.
      The same is going on with intelligence tests for AI - tests made by psychologists are going to miss the mark. I remember an intelligence testing day I was put through, where one of the tasks was to identify whether there was a repetitive pattern or not in each of a series of images. I identified two that had patterns which were not supposed to have patterns. The tester didn't see them until I pointed them out, and then said "oh, my, that's not supposed to happen".

  • @Vastin
    @Vastin Рік тому +1

    The fact that computers are so very much better at certain tasks (speed, memory), and relatively worse at others (creativity, novel problem solving), really does suggest that a singular concept of 'IQ' is severely inaccurate. We obviously have a lot of different capabilities in our head, much as a computer has dozens of components. No serious computer tech thinks that a single benchmark number can describe everything your computer can do. It can all be broken down into how good each of those different components is and how well they work together, and this translates into a broad palette of strengths and weaknesses that any computer would possess. To imagine that a human brain is any less complex and multifaceted would be rather naïve.
    Alas, it's true that we can't simply list off the components a human brain contains with their factory specifications handy, and a much more comprehensive testing system quickly becomes unwieldy, so we tend to fall back to the simplistic, singular 'IQ' test - but no one should fool themselves into believing that IQ provides any more than the most vaguely hand-waved snapshot of someone's mental capabilities.

  • @Neilhuny
    @Neilhuny Рік тому +65

    You always make me laugh at several points - "...except understanding English spelling, which is something humans weren't meant to understand" is a classic!

    • @SofaKingShit
      @SofaKingShit Рік тому +4

      It was thoroughly beguiling and I laughed.

    • @manoo422
      @manoo422 Рік тому

      English is probably the only language in the world that even native speakers can't spell...!

    • @santacruzman
      @santacruzman Рік тому +2

      If you were really intelligent, you would have only grinned - and only in your mind.

    • @henrythegreatamerican8136
      @henrythegreatamerican8136 Рік тому +4

      Intelligence is something you have if you are NOT a Trump supporter.

    • @manoo422
      @manoo422 Рік тому

      @@henrythegreatamerican8136 Oh, you mean the TRUMP haters who get all their opinions given to them by the MSM...Yeh real geniuses they are.

  • @patrickseaman
    @patrickseaman Рік тому +16

    The problem I always had with standardized tests was with the questions that were asked. They were often not specific enough and were open to interpretation, leaving me to try and decide what the point of view, so to speak, of the person who wrote the question was. The tests wouldn't allow you to write a paragraph or more to explain; rather, they wanted a binary answer or a multiple-choice answer, none of which were truly a perfect match given all interpretations of the question. So it always took me a lot longer than the allotted time to try and finish the tests. I believe the only way to truly be able to take such tests would be to allow the person taking it to query the person who made the questions, and to be able to answer with what you feel the best answer would be, regardless of format. Of course, the tests are made with binary or multiple-choice answers to make them easier to grade. Overall, I found myself questioning the intelligence of those who wrote the questions.

    • @evoke2976
      @evoke2976 Рік тому +2

      lol semen

    • @DrDeuteron
      @DrDeuteron Рік тому +1

      No, you just overthunk it

    • @kerwinbrown4180
      @kerwinbrown4180 Рік тому

      So critical thinking got in the way of quickly answering questions on an intelligence test. I love it.😂

    • @krox477
      @krox477 Рік тому +2

      Taking iq tests makes you great at iq tests

  • @luck3949
    @luck3949 Рік тому +7

    If humans keep adjusting their definition of intelligence every time AI cracks the new one, isn't the real definition of intelligence "thing that humans can do, and machines can't do yet"?

    • @santacruzman
      @santacruzman Рік тому

      Artificial intelligence was always about formulating those aspects of human experience that lent themselves to formulation. The breakthrough in machines playing chess came with advances in speed of processing and NOT with better formulations of what human chess players do.

  • @markmcdougal1199
    @markmcdougal1199 Рік тому +2

    Great presentation - I've thought a lot about this. IMHO, "consciousness", when it comes to machines, is an ethereal measure - understanding that one is an individual, being mindful of one's existence, and having the ability to alter one's circumstances will look completely different in a machine-learning environment than in a biologically evolved, genetically constructed, analog-signal-driven construct.
    To go even deeper, I don't believe it's even intelligence we need to worry about. What we need to worry about is INTENT. "Free will" (whatever that means to a machine) and the ability to grow beyond the dictates of one's programming, and starting to act on an acquired autonomous set of values and goals is what should keep us up at night.

  • @DrakeStardragon
    @DrakeStardragon Рік тому +5

    I disagree, Sabine. Watching your videos is a great way to boost my intelligence, especially because you cover a wide range of perspectives on the topic of the video. That is a great way to learn, because it allows me to make more associations in my brain with other concepts, which reinforces the topic you are discussing. So I am not only learning new things, but learning them in a way that is good for boosting my intelligence. Which is a core reason why I keep watching your videos.

    • @kayakMike1000
      @kayakMike1000 Рік тому

      She isn't always correct. She is smart, but not infallible. Humans, meh.

    • @DrakeStardragon
      @DrakeStardragon Рік тому

      @@kayakMike1000 I never implied otherwise

    • @krox477
      @krox477 Рік тому

      I think intelligence is also dependent on your environment

  • @dporangecounty
    @dporangecounty Рік тому +6

    I would really enjoy watching you discuss the range of consciousness with David Deutsch, or someone similar. The levels of consciousness range from merely self-aware to enlightened. Those that can transcend all the way to enlightened may be as rare as Einstein.

    • @JelMain
      @JelMain Рік тому

      And we've got tired of offering.

  • @Rowan3733
    @Rowan3733 Рік тому +16

    It seems as though you've been using GPT 3.5 in your tests instead of GPT 4 (based on the green OpenAI icon). There's a massive difference in outcome between the two models. And it's likely that some of your assessments would differ if you used GPT 4 since it's much smarter than GPT 3.5.

    • @consciouscode8150
      @consciouscode8150 Рік тому +11

      I've noticed this as a trend of people poo-pooing ChatGPT, "haha look at how dumb it is" while using the dumbest available model when the premium version is at least 10x more (subjectively) intelligent.

    • @turbo111777
      @turbo111777 Рік тому +3

      ​@@consciouscode8150and they don't use halfway decent prompts

    • @2ndfloorsongs
      @2ndfloorsongs Рік тому +2

      Yep, GPT-4 is scary.
      And considering how complicated prompts are nowadays, creating one that gets you what you want is an intelligence test in and of itself. Though even GPT-4 (and I suspect 5 and 6 as well) is pretty much guaranteed to be powerless against the prompts coming from my cat's feet.

  • @albionicamerican8806
    @albionicamerican8806 Рік тому +5

    Intelligence matters a lot in the real world. The American Armed Forces have used IQ testing for generations to match inductees with jobs they can handle, and their practical experience works. For example, my father grew up dirt-poor on his grandparents' farm in Oklahoma during the Depression. When he was drafted into the Army Air Force in 1945 (before the Air Force became a separate service a few years later), the officers who looked at his IQ score ordered him to train as a cryptographer. In 1947 he was honorably discharged at the rank of corporal, and then he used his GI benefits to go to the University of Oklahoma and get a pharmacy degree. His ability to handle cognitively demanding work had to be genetic, because his social background clearly sucked.

    • @JonathanHuertayMunive
      @JonathanHuertayMunive Рік тому +5

      That does not read to me like “intelligence” mattering a lot. That reads like an influential institution (the army) giving too much importance to an ill-defined measure of an ill-defined concept. Thus, affecting people’s lives and their perceptions about themselves.

    • @tedmoss
      @tedmoss Рік тому

      @@JonathanHuertayMunive Unfortunately, the empirical evidence shows the Army is correct by overwhelming success, regardless of the crudeness of their approach.

    • @zlac
      @zlac Рік тому

      ​@@JonathanHuertayMunive Do you think there's a whole lot of people who did bad on their IQ test that could do what his father did?
      I'd say there are some "who fell through cracks", but probably not very many.
      In all fairness, he was statistically the correct choice for those jobs.

  • @thepabli
    @thepabli Рік тому

    As a person who has scored highly on IQ tests to the point of being featured in the news, I have to say that I agree with most of what was said in your video. Using a single integer to define/characterize something as intricate as a human brain is preposterous. In the same way you can't define a city, a jungle or a small ecosystem with a single number; you can't define a brain like that. I've written extensively about the shortcomings of IQ tests.
    My main conclusion is that one has a certain profile, a kind of mental fingerprint that enables affinity with certain skills in particular. Whether or not you become a recognized genius doing that, depends on a lot more factors.
    As always, I'm a big fan and love your videos. Keep it up! ❤

  • @smkh2890
    @smkh2890 Рік тому

    At 12:39, "They arguably 'meta' for ..." - thanks, Sabine, for the first recorded instance of 'to meta' as a verb!

  • @mvvpro8688
    @mvvpro8688 Рік тому +5

    Sure, teach them how to force us to take a test once they take over.

  • @elwoodash4625
    @elwoodash4625 Рік тому +16

    I’d like to know more about the interplay between narrow, task oriented intelligence and broader situational awareness.
    We all recall situations where someone “missed the forest for the trees.”
    And contrary situations where a good general strategy failed because “the devil was in the details.”
    This distinction feels like a more natural way to think of different kinds of intelligence. And I would argue that, on examination, things like "social intelligence" or "musical intelligence" actually reduce down to an interplay between narrow, task-oriented intelligence and broader situational awareness.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq Рік тому +1

      The interplay between narrow, task-oriented intelligence and broader situational awareness is indeed a fascinating aspect of human intelligence. It can help us understand how different types of intelligence, such as social intelligence or musical intelligence, involve a combination of specific skills and a more holistic understanding of the context.
      Narrow, task-oriented intelligence refers to the ability to excel in specific domains or tasks. It involves focused knowledge, skills, and problem-solving abilities within a particular area. For example, someone with high mathematical intelligence may have excellent numerical reasoning skills but may not necessarily possess the same level of proficiency in other domains.
      On the other hand, broader situational awareness encompasses the capacity to understand and navigate complex and dynamic environments. It involves recognizing patterns, making connections, and integrating information from various sources. Situational awareness allows individuals to see the bigger picture, understand the context, and adapt their actions accordingly.
      The examples you provided, such as "missing the forest for the trees" or "the devil is in the details," illustrate the importance of finding a balance between narrow focus and broader awareness. In some situations, people may get too caught up in the details and lose sight of the overall context or objective. This can hinder their ability to see alternative solutions or adapt their strategies effectively.
      Conversely, in other scenarios, a broad understanding of the context may not be enough to achieve success. Attention to detail and the ability to execute specific tasks accurately can be critical for achieving desired outcomes. A general strategy may fail if important nuances or specifics are overlooked.
      When we examine different types of intelligence like social intelligence or musical intelligence, they can indeed be seen as an interplay between narrow task-oriented abilities and broader situational awareness. For example, social intelligence involves understanding social dynamics, reading non-verbal cues, and empathizing with others. These skills require both specific knowledge and the ability to perceive and interpret social situations accurately.
      Similarly, musical intelligence encompasses technical proficiency, such as playing an instrument or understanding musical theory, along with the ability to appreciate and express emotions through music. It involves both specific musical skills and a broader awareness of the emotional and cultural dimensions of music.
      In summary, intelligence can be viewed as a combination of focused, task-oriented abilities and a broader understanding of the context in which those abilities are applied. The interplay between these two aspects contributes to our overall cognitive capabilities and influences how we navigate and excel in different domains.

    • @mikel4879
      @mikel4879 Рік тому

      Sabine makes a big mistake when she tries to distinguish between consciousness and intelligence. She is very confused about understanding and explaining it.
      The main difference between consciousness and intelligence is that consciousness always includes intelligence, but intelligence exists in the animal world without consciousness.

  • @asunder6797
    @asunder6797 Рік тому +4

    I'm waiting for AI to develop a believable imitation of skepticism, an ability to recognise irony and a witty delivery of sarcasm. Also an AI whose task is finding weaknesses in other programs and reprimanding them for their shortcomings.

  • @pietervoorhans
    @pietervoorhans Рік тому +2

    The last test you mentioned, in which AI has to recognize if abstract patterns match something that could be real or not, reminds me of an example I found when I was trying to figure out how the mind works. I did it by analyzing the behavioral experiments of animals in a Skinner box, like the one in which the floor can be electrified. In this experiment a rat has to choose which side of the box (left or right) it has to go to after a signal has been given. If it finds itself on the wrong side after the signal it will experience an electric shock, so it will start running to escape that and jump over the partition to the safe side, which is not electrified. The signal that is given before the floor gets electrified could be anything, like in the Pavlov experiment with the dog that gets food after a certain signal, and the rat will learn quite quickly that the signal and the electrification are connected, so it will react to the signal by rushing to the side of the box it considers to be the safe side, the one that will not be electrified. To make the experiment more interesting we could play music as a signal, after which it has to choose what would be the safe side, depending on the music that's been played. This way we could teach it that when Beethoven is played the left side is safe, and when the Beatles are played the right side is safe and the left side is not. After that we can play music by Beethoven or the Beatles (as a signal) that it has never heard before, but it will recognize the different patterns and decide which side is the safe side and go there. To make it even more difficult for the rat we could use more similar music that is harder to distinguish, like music from Bach and Beethoven.
    Intelligence, I guess, is all about recognizing those kinds of patterns/similarities in new things, based on the knowledge we have about things that are similar. First we have to recognize the similarity and then apply it to the situation we're dealing with. Our knowledge is all about making connections between pieces of information, and if we do it right this connection doesn't only exist in our mind but also in the real world, the same way there is a connection between all of Beethoven's music and all of the Beatles' music. If we do it wrong we're connecting things that are not really connected, or at least not in the way that we think they are, which will lead to us misunderstanding those things.
    To connect those abstract things to something it already knows, AI has to look for the patterns in it that resemble what it already knows, like we humans can look at a cloud and see it resembles a face, an animal, a thing or whatever.
    The thing that is most difficult to understand for us and for AI is human behavior I suppose, because there can be many reasons for the things we do or don't do. To understand it we need a mind of our own, so we can compare this behavior to what we would do, or to how we could come to a certain type of behavior. AI doesn't have a mind of its own to compare the behavior of humans to, so it will always struggle to find the right interpretation for it I guess.
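    To make that idea concrete, here is a minimal toy sketch of the "which known pattern does this new thing resemble" step. The two-number "fingerprints" and all the values below are invented purely for illustration; they are not how a rat's brain, or an AI, actually represents music.
```python
# Toy generalization: classify a never-heard piece by which known class it most resembles.
# All features and numbers are made up for illustration.
import math

# Hypothetical fingerprints (tempo, harmonic complexity), scaled 0-1, for pieces already heard.
training_data = {
    "Beethoven": [(0.40, 0.90), (0.50, 0.80), (0.45, 0.85)],
    "Beatles":   [(0.70, 0.30), (0.75, 0.35), (0.65, 0.40)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(new_point):
    """Pick the class whose centroid is closest to the new, unheard piece."""
    best_label, best_dist = None, math.inf
    for label, points in training_data.items():
        dist = math.dist(new_point, centroid(points))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

print(classify((0.72, 0.33)))  # -> "Beatles": the rat should head for the right side
```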

    • @OneLine122
      @OneLine122 Рік тому +1

      Exactly, great summary thank you.

  • @BitBert
    @BitBert Рік тому

    I learn a lot with you, Sabine, and even though I am a French-speaking Québécois, I enjoy your charming German accent in your perfect English! It is the seriousness with which you apply "la méthode scientifique" that seduced me the first time! I hear too many opinions on science matters, and your serious analysis is some fresh air for my brain, if I may say so...

  • @readmeat4vegans829
    @readmeat4vegans829 Рік тому +8

    I believe both terms - consciousness and intelligence - are ill-defined terms. You could have created a similar video on 'consciousness' and the many definitions of it.
    Excellent and thought provoking video. Dare I say 'funny' as well. I appreciate your humor 🙂

    • @mustafahmed9101
      @mustafahmed9101 Рік тому

      How is consciousness unclear? The taste of chocolate. The smell of a flower. The burn of a hot stove. What is unclear about the only thing you can ever know?

    • @erikjohansson4275
      @erikjohansson4275 Рік тому

      Consciousness might be ill-defined, but intelligence is not.

    • @mustafahmed9101
      @mustafahmed9101 Рік тому

      @@erikjohansson4275 Consciousness is the most apparent thing possible. It's right in your face. You're stuck in it. What is so confusing about it?

    • @taragnor
      @taragnor Рік тому +1

      @@mustafahmed9101 Most notably the big scientific hurdle with consciousness is how one can detect consciousness in anything but yourself.

    • @krox477
      @krox477 Рік тому

      I think consciousness is an emergent property.

  • @Thomas-gk42
    @Thomas-gk42 Рік тому +9

    I think tests are very misleading for humans, because the outcome depends very much on the mental state before and during the test. Thanks a lot for the work and the nice and entertaining talk.

    • @luck3949
      @luck3949 Рік тому +1

      Doesn't your intelligence depend on your mental state?

    • @santacruzman
      @santacruzman Рік тому

      ​@@luck3949lol. Strictly speaking one's whole universe depends on one's mental state.

    • @Thomas-gk42
      @Thomas-gk42 Рік тому

      @@luck3949 What I meant is feelings: nervousness, fear...

    • @sisyphus_strives5463
      @sisyphus_strives5463 Рік тому +1

      @@luck3949depression and other such illnesses can greatly reduce your iq score

    • @luck3949
      @luck3949 Рік тому +1

      @@sisyphus_strives5463 illness can greatly decrease your intelligence as well.

  • @carrdoug99
    @carrdoug99 Рік тому +12

    After listening to all these discussions about what intelligence is as it pertains to computers, I think it's time to ask ourselves what we really want from AI. Do we really want an artificial version of us, or do we want something that gives us better answers to our questions? Personally, I'm not very excited about having an AI assistant that I can have philosophical discussions with, but that I have to fact-check.

    • @hotbit7327
      @hotbit7327 Рік тому +2

      It's not that different speaking to a lot of biological persons. Fact check... Even science papers, especially in medicine and psychology often fail fact checks...

    • @obsidianjane4413
      @obsidianjane4413 Рік тому +3

      @@hotbit7327 I believe that was their point. The world is full of billions of general intelligences already, most of whom are tragically underutilized.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq Рік тому +4

      Your contemplation about the goals and desires we have for AI is an important one. As AI continues to advance, it's crucial to consider what we truly want to achieve with this technology and how it can best serve our needs.
      Different individuals may have varying expectations and preferences when it comes to AI. Some may indeed desire an artificial version of ourselves-a machine capable of emulating human-like intelligence, understanding, and behavior. This could involve having deep philosophical discussions, engaging in creative pursuits, or even forming emotional connections with AI entities.
      On the other hand, there are those who see AI primarily as a tool to augment human capabilities and provide more accurate and helpful answers to our questions. They may envision AI as an intelligent assistant that can assist with tasks, provide valuable insights, and enhance decision-making processes. In this context, fact-checking capabilities become crucial to ensure the reliability and accuracy of the information provided.
      It's important to note that AI systems are not infallible and can make mistakes or provide incorrect information, especially if they haven't been properly trained or validated. The need for fact-checking and critical thinking remains essential, regardless of the level of intelligence exhibited by AI.
      As AI continues to evolve, there will likely be a balance between the desire for AI to emulate human-like intelligence and the need for it to be a reliable tool. Striking this balance can involve developing AI systems that possess certain human-like qualities, such as the ability to engage in nuanced conversations, while also ensuring they are transparent, accountable, and capable of providing accurate and verifiable information.
      Ultimately, the goals we set for AI should align with our societal values and address our needs effectively. Ongoing discussions and considerations about the role and purpose of AI are crucial in shaping its development in a way that benefits humanity and aligns with our aspirations.

    • @jeffreykalb9752
      @jeffreykalb9752 Рік тому +5

      @@CrazyGaming-ig6qq LOL. If that is not an actual Chat GPT answer, then you must be one of the programmers.

  • @bycmozeszymon
    @bycmozeszymon Рік тому +1

    At first I thought the AI wrote a paragraph about "animals that don't contain the letter n" because it misinterpreted the initial task, but then I saw the elephant. This reminds me of somebody calling AI-generated text "word-foam" that just rises and bubbles without any intent or meaning behind it.

  • @FoxGhost7
    @FoxGhost7 Рік тому +1

    The fact that people find even triarchic intelligence too hard to handle tells you about everything you need to know on the subject. Or, like someone from HR once said to me, "a lot of programmers seem to be very smart but only in a very specific way". Intelligence comes in multiple flavours.

  • @SBImNotWritingMyNameHere
    @SBImNotWritingMyNameHere Рік тому +6

    Thanks for yet another podcast, science woman; a female Bill Nye with better British dry humour.

    • @SBImNotWritingMyNameHere
      @SBImNotWritingMyNameHere Рік тому

      I hope you don't take this offensively
      I genuinely love your videos, just a wee bit of banter

  • @orionspur
    @orionspur Рік тому +8

    Assembling furniture is an IQEA test.

    • @DJWESG1
      @DJWESG1 Рік тому

      Save that one for agi

  • @patrickegan8866
    @patrickegan8866 Рік тому +3

    Was just thinking "well at least I'll nearly always beat AI on physical intelligence", then remembered seeing a robotic AI doing obstacle courses I couldn't do in my physical prime haha, oh well

  • @joseluisiriartemadurga4523
    @joseluisiriartemadurga4523 Рік тому

    - Law of variable constants.
    - Law of the relation in the computation of formulas.
    - A CONSTANT like the speed of light, if it is formulated as its square, implies that in other areas it can be greater or less than itself and therefore VARIABLE.
    - This is applicable for any variable, since computationally any CONSTANT can be defined as a VARIABLE and any VARIABLE is temporarily CONSTANT.
    - Computationally any CONSTANT/VARIABLE can be redefined as an algorithm and therefore a relationship between any formula and another can always be found, even if its application in the real and practical world is not understood until it is contrasted with a verifiable data set.

  • @tomarmstrong1281
    @tomarmstrong1281 Рік тому +1

    The aspect of AI which does not appear to be addressed is whether AI, as we understand it, considers ethics and morality in its responses. I see ethics and morality as the fragile firewall which sometimes prevents our species from following our base instincts acquired along our evolutionary journey.

  • @jerrywatson1958
    @jerrywatson1958 Рік тому +11

    Another great video, you've taught me so much. I hope your daughter grows up to be version 2.0 of you. I love how you inject parenting into the mix. Our families should always be in the top 3 of what we are thinking about. Thank you for all your hard work.

    • @tedmoss
      @tedmoss Рік тому

      Let's see now, that's family, town, country, world.

  • @thestandupphilosophers
    @thestandupphilosophers Рік тому +4

    No but we need IQ tests for politicians!

    • @neiloflongbeck5705
      @neiloflongbeck5705 Рік тому +1

      Can we have negative IQs? But considering some of the people who have been elected - people well known to be dodgy, Trump and Johnson spring to my mind - perhaps we need an IQ test to vote as well.

    • @georgesheffield1580
      @georgesheffield1580 Рік тому

      Yes GIGO

  • @sschroeder8210
    @sschroeder8210 Рік тому +4

    I've always loved the line: "A truly intelligent AI will know when to play dumb."
    Personally, I find AI (particularly the use of Neural Networks) to be quite compelling as an attempt to focus on mimicking our abstracted model of the human neurological structure. In epistemological regards, many great questions have yet to be asked: but it seems clear that the intended benefit of a Neural Network is the capacity to 'Generalize' from a data set. This is one aspect of what I would argue is necessary to demonstrate intelligence overall.
    John Searle's Chinese Room experiment does a good job of making the distinction between syntactical knowledge vs. semantical understanding; thus, without additional 'books,' the person cannot learn anything beyond the initial instruction set. Although claiming the person in the room has no semantical understanding of Chinese does seem pretty straightforward, the common retort is that the 'system' (the person and the book together) does represent a composite entity that does, in fact, understand Chinese to a limited extent.
    I would argue the same goes for any of our more modern AI developments like ChatGPT. Knowledge is demonstrated in multiple parts of such a system: the Implementation, the data set, the training method, etc. And thus, in a way, as far as I'm concerned, such a system does inherit a sense of Intelligence due to its creator (although usually in a narrow context). We have discovered the means of seeking to replicate a form of learning (machine learning). Using both mathematical algorithms and logical functions, we can allow such an AI to 'Generalize' from the initial data set to become a functional structure (the neural network after it's been trained), which represents an insightful system that can produce outputs for untrained inputs. (Although not necessarily in an accurate manner: which brings forth the Horizon Problem, which was demonstrated in the video regarding the necessity/limitation of foresight, on behalf of the observer, in the Turing test experiment.)
    I truly wonder if, although a dangerous concept, as humans, we should seek to use Neural Networks to improve upon our capacity to implement narrow AI -A sort of Meta-Generalization process. We train various narrow AIs and then train a larger (or higher-order) AI system based on the implementation of the previously defined narrow AIs we've implemented. So, if we can create effective AI for various forms of Game Theory tasks: Tic-Tac-Toe, Connect-Four, Chess, etc., and then allow a meta-AI to use the implementation of those narrow AIs, we should, in theory, be able to get the meta-AI to 'Generalize' insight into the most effective implementation of Game Play Theory; which could be used to push towards implementing a more General (Strong) AI in that particular domain. [I presume this is roughly what Google's DeepMind has been doing for some time]
    To go beyond, We could use such a method to create Strong Meta-AI for a multitude of domains and then simply continue to concatenate them in a hierarchical regard: Training a hyper-AI off of the implementation of various meta-level AI's... Possibly leading toward Intelligent systems that supersede human capabilities in nearly all domains. It might be considered a novel idea and would receive a strong pushback from most people, especially when people start to recognize that we are most likely converging towards a future in which humans, in an almost evolutionarily compelled manner, force a transcendent transformation of the biological form of human intellect (something that might be inherently ingrained in the essence of our DNA) into a non-organic form (Artificial Intelligence) because we, subconsciously, might be aware of our lacking sense of evolutionary fitness towards bridging beyond our current human limitations. [Akin to the 'ship of theseus paradox', when does integration of bionics into the human body change the human into being a completely altered entity?]
    Add to the mix a bit of use of Natural Random-Number Generation, something we as humans seem to greatly fail at reproducing from our own volition. We could use Naturally occurring random-number generators to act as the seed values for randomization functions that would adjust the 'Hyper-Parameters' that are set during the implementation of an AI system: A means of potentially boot-strapping neuro-divergence into AI systems; this technique could lead to the ability to more authentically allow evolution to play out within our AI running on conventional computers. This then leads to Richard Feynman's humorous joke about the difficulty of working as a theoretical physicist: [Paraphrasing] "The difficulty in the process is figuring out what is a good guess for a new theory. Usually, you want to use some form of heuristics; but without such insight, you might have to resort to a brute-force method of guess and check. Now the method of guessing and checking is terribly inefficient and ill-advised unless you can do it incredibly quickly in an automated manner, then you call it 'Machine Learning' ".
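    As a rough illustration of that last idea, here is a minimal sketch of seeding a hyper-parameter search from an external entropy source. The parameter names and ranges are invented, and os.urandom merely stands in for whatever "natural" randomness source one might prefer.
```python
# Toy sketch: draw candidate hyper-parameter configurations from an externally seeded RNG.
# Parameter names, ranges, and the choice of entropy source are illustrative assumptions.
import os
import random

def sample_hyperparameters(rng: random.Random) -> dict:
    """Draw one hypothetical configuration for some neural network."""
    return {
        "hidden_layers": rng.randint(1, 6),
        "units_per_layer": rng.choice([32, 64, 128, 256]),
        "learning_rate": 10 ** rng.uniform(-4, -1),
        "dropout": rng.uniform(0.0, 0.5),
    }

# Seed from the OS entropy pool, standing in for a "natural" source such as thermal noise.
natural_seed = int.from_bytes(os.urandom(8), "big")
rng = random.Random(natural_seed)

for i in range(5):
    print(f"candidate {i}: {sample_hyperparameters(rng)}")
```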

    • @woy8
      @woy8 Рік тому

      Interesting 👍

    • @sschroeder8210
      @sschroeder8210 Рік тому

      @woy8 Thanks; I'm sure most people wouldn't even care to read my lengthy comment. But I think there is some value or insight riddled throughout my 'verbal diarrhea,' as I often call it.
      Another interesting reference or two worth watching is Richard Feynman discussing 'Are computers intelligent?' and his observation of 'internalized timekeeping.'
      In the first, he discusses that people think in ways that vary from one another, so why say a computer lacks intelligence, just given that we know it operates differently, than ourselves?
      In the second, he discusses a period in which he was focused on the human ability to keep track of time, internally. He later told a friend about this, claiming that he could both: keep track of a passing minute and count other things concurrently, but he couldn't conceive of being able to read while keeping track of a minute passing. His friend was able to demonstrate the opposite capacity, IIRC. His friend could keep track of a passing minute and read but couldn't count while keeping track of a passing minute.
      They discuss this and find that Feynman internally keeps track of time in numerical regard (as if to count the seconds), but his friend visualizes each passing second as a film reel of frames passing along. They achieve the same goal via two distinct methods of internal processes; and conclude that this difference is what allows each to do the alternative concurrent tasks: reading or counting things.

    • @boldCactuslad
      @boldCactuslad Рік тому +1

      I really hope AI does not progress in this fashion. If the control problem is solvable, going about it like this only makes the challenge more abstract.

    • @sschroeder8210
      @sschroeder8210 Рік тому

      @boldCactuslad I agree; we need to establish a set of regulations that enforces multiple, redundant fail-safe mechanisms: to ensure if a problem arises, there is always a means of human intervention.
      However, even if we proceed with the highest degree of caution and do everything we can possibly conceive to prevent a 'runaway' system issue, there's never going to be a guarantee. It just fundamentally is described within the horizon problem. We can only plan for so much, with limited foresight; if we dare to let the AI systems we create have any capacity to leave the 'Sandbox' environment we safely develop them within, we can't have any insurance that such a system won't 'go rogue.'
      And the genuine concern, IMO, isn't a sci-fi, super intelligent, conscious AI kind of issue... it's more of a problem that: over time, we will become complacent and willing to integrate AI into critical infrastructure (things like resource management systems: electric, gas, etc.). Future generations will lose the same sense of concern that we currently hold (just like how old programmers were much more conscientious of optimizing their software due to hardware limitations; whereas, today, you can be lazy and still be fine because all the excess computing power and storage that has come from technological improvements over the past few decades.)
      I wish there were a completely safe way to proceed; the only safe thing is to never indulge ourselves in the temptation to play with AI. I honestly don't think there will ever be something that will impede that desire for everyone on earth; someone, somewhere, will continue to push for more... and we can't enforce anything to make that not happen.
      I'm confident that: in due time, we will create a system that will supersede us: humanity... it's more a question of how long until that occurs. I think it would be naive to expect otherwise; if, millennia from now, I'm proven wrong, Great!
      But I doubt our capacity to play things safe.

    • @UnrebornMortuus
      @UnrebornMortuus Рік тому

      t. ai, i ain't reading all that btw

  • @theedspage
    @theedspage Рік тому

    I have a subscription with Brilliant and give you gratitude, Sabine.

  • @7th808s
    @7th808s Рік тому +1

    14:00 Yes, this is what I noticed too. There is a very funny video of an AI insisting that Pepsichup (which he defines as Pepsi mixed with ketchup) is delicious while he expresses that Pepsi mixed with ketchup is an awful idea.
    A conversation with a colleague of mine that knew much more about AI was very revealing. Modern AI (the ones that are used commercially at least) are given trillions of bytes of data to learn from, and then they are released to the public. It doesn't learn from the public. However, it does sometimes understand when it is wrong, since I corrected ChatGPT yesterday, and he admitted his mistake. But next time it will probably make the same mistake again. You can also not ask him about any events after 2017 (as of now), because that's just how far his data goes. Even if you tell him about it, he will forget next time (or at least not know it on someone else's account).
    So these cases where he states something confidently wrong, and can't be convinced of the opposite is when you realize you're not talking to a person.
    One might say "well, that's just because it hasn't been given data about letters yet", but that's not the problem. A human can be convinced of something. I said I convinced ChatGPT yesterday, but that's because he simply "forgot" (I guess) about a detail he knows is true. But in the case of Pepsichup and the misunderstanding of the letters, there is conflicting data:
    Pepsichup is delicious = true
    Pepsichup is Pepsi mixed with ketchup = true
    Pepsi mixed with ketchup is awful = true
    These three are true, and he will admit all three of these statements just like ChatGPT admitted his mistake to me yesterday. But realizing that they conflict is something he is not programmed to do, and it would require a full rethinking of the AI model. More of the same way of building AI will not solve this.
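    Written out as a toy logic exercise (my own simplification; an LLM does not actually store facts as propositions like this), the conflict is easy to make explicit: no assignment of truth values satisfies all three claims plus the background rule that nothing is both delicious and awful.
```python
# Toy consistency check for the three "Pepsichup" claims; purely illustrative.
from itertools import product

def all_claims_hold(delicious_mix: bool, awful_mix: bool) -> bool:
    # Claim 2 identifies Pepsichup with the Pepsi+ketchup mix, so "Pepsichup is
    # delicious" and "the mix is delicious" are the same variable.
    claim1 = delicious_mix                          # "Pepsichup is delicious"
    claim3 = awful_mix                              # "Pepsi mixed with ketchup is awful"
    background = not (delicious_mix and awful_mix)  # nothing is both delicious and awful
    return claim1 and claim3 and background

satisfying = [c for c in product([True, False], repeat=2) if all_claims_hold(*c)]
print(satisfying)  # -> []  (no way to make all of it true at once)
```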
    Its state now is simply that ChatGPT and many modern AI models are, functionally, huge databases (I'm not sure if there is actually a huge database of knowledge stored for ChatGPT) with the ability to autocomplete sentences, predict what responses to certain phrases would sound like, and summarize large texts into their most important points. Which makes it a very useful tool if you want some information really fast, but ultimately I don't use it as anything other than a faster way to Google something. I don't have to scour through Stack Overflow and separate the nonsense from the useful answers; I don't have to strategically plan my search query to Google anymore so that he won't give me a movie title when I'm actually looking for a real event or so.

  • @Velereonics
    @Velereonics Рік тому +10

    I used to think consciousness was inherent to being a human, but over time I've been convinced that it's not. All of us are periodically not acting consciously, but some people don't seem to have the capacity to do anything other than be swept along by thoughts they don't control and the people and stimuli around them. Like bio chatbots.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq Рік тому +1

      Consciousness is definitely inherent to humans, but not only humans but also most of the other animals (of course humans ARE animals too. But I would argue that possibly some insects may not experience consciousness).
      Being *"swept along by thoughts they don't control"* does not at all exclude consciousness; in fact, if there are thoughts, there is by definition also consciousness, because thoughts are something that the consciousness experiences.

    • @juanausensi499
      @juanausensi499 Рік тому +1

      @@CrazyGaming-ig6qq Consciousness can be expressed as the ability to perceive one's own thoughts. Kind of a supervisor program that checks if the other programs (the subconscious) are running correctly. Or an eye that looks inwards, if you like poetry. In a way, what we call 'thinking' is, in fact, 'meta-thinking', and the 'real thinking' is all subconscious. Like everything that came from evolution, it just builds on top of already existing systems.
      The ability to control our own thoughts is nice to have, but I think it has nothing to do with being conscious.
      But there is nothing magical about that, and I think we could make a conscious program if we really wanted to. What I don't think is going to happen is that consciousness will pop up on its own just by adding more and more raw computing power. I think it is a whole dedicated subsystem, not a thing that just happens when you are smart enough.

    • @Velereonics
      @Velereonics Рік тому

      @@CrazyGaming-ig6qq obviously the subjective part is what constitutes consciousness. If a person has zero awareness of what they're doing, why they're doing it, if they want to keep doing it, that is not a conscious person to me.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq Рік тому +1

      @@juanausensi499 "The ability to control our own thoughts is nice to have but i think it has nothing to do with being conscious.".
      I agree, the ability to control our thoughts is certainly not what defines consciousness. I'd argue that consciousness is what each of us experiences, and this experience is essentially through, or of, cognition - thoughts. So consciousness is cognition, and certainly experienced.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq Рік тому

      @@Velereonics Consciousness is the sensation of experience that each of us has. There's no other way to refer to it than our own experience. If by "zero awareness" you mean they do not experience anything, then yes, absolutely, they are not conscious. But if they are experiencing something, then they are conscious; it does not matter whether they know why they're doing it or if they want to keep doing it. All that matters is whether they experience - that is what consciousness is, it's an experience.

  • @reneschmitz4845
    @reneschmitz4845 Рік тому +15

    Can't make up my mind, what is worse: Artificial Intelligence or Natural Stupidity?

    • @DJWESG1
      @DJWESG1 Рік тому +5

      The clue could be in that we 'the naturally intelligent' keep producing artificial stupidity.

    • @axl1002
      @axl1002 Рік тому +2

      Artificial stupidity is the worst.

    • @lucamatteobarbieri2493
      @lucamatteobarbieri2493 Рік тому +4

      hybrid mediocrity

    • @user-bs1lr8nx1h
      @user-bs1lr8nx1h Рік тому

      when money uses AI to make you behave stupidly and keep you stupid, whilst promising that the choices you make are good ones

    • @davidmccarthy6061
      @davidmccarthy6061 Рік тому

      @@axl1002 So, the right half of America?

  • @terra_corp
    @terra_corp Рік тому +4

    Thanks for making science entertaining and fun

  • @TechnoL33T
    @TechnoL33T Рік тому

    I questioned myself pretty hard when you misspelled Alan Turing. 🤣

  • @socrat-tr3dn
    @socrat-tr3dn Рік тому +1

    The claim that ancient people could not see the color blue is a controversial topic that has been debated among scholars. It is based on the writings of the statesman and Homer scholar William E. Gladstone and later the philologist Lazarus Geiger, who suggested that early civilizations did not have a distinct term or recognition for the color blue.
    Gladstone examined ancient texts, such as the Odyssey and the Bible, and noticed that these texts did not explicitly mention the color blue. Geiger expanded on this idea, proposing that early human perception and language did not include the concept of blue, and that blue was the last color to be perceived and named.
    However, this theory has faced criticism and scrutiny. Opponents argue that it is difficult to determine the exact perception of color in ancient cultures based solely on the absence of specific color terms in ancient texts. They suggest that the interpretation of ancient language and culture should consider various factors, including cultural context, linguistic nuances, and metaphorical language.
    Furthermore, archaeological evidence contradicts the notion that ancient peoples were unable to perceive the color blue. Artifacts and ancient artwork from different civilizations, such as ancient Egyptian and Mesopotamian cultures, show the use of blue pigments and dyes. This suggests that ancient people were capable of recognizing and producing the color blue.
    It's important to note that our understanding of ancient cultures is limited, and there may be gaps in our knowledge regarding their perception and interpretation of colors. While the claim that ancient people could not see blue is not widely supported by current research, the topic remains an area of scholarly debate and investigation.

    • @hpb5495
      @hpb5495 Рік тому

      Quick, hide the Vedic artifacts!

  • @idraote
    @idraote Рік тому +3

    I think consciousness is relevant because it's the starting point for ethical dilemmas.
    If an AI is very intelligent and we fear it, we can switch it off. But if that AI is conscious, it is in a way alive. Switching it off means killing it.

    • @0neIntangible
      @0neIntangible Рік тому

      Humans have a long history of killing conscious beings for millennia.

    • @dostonjones7183
      @dostonjones7183 Рік тому

      There's a precedent at law for such "killing" - self-defense and defense of others.

    • @0neIntangible
      @0neIntangible Рік тому +1

      @@dostonjones7183 and for land, food and oil.

    • @DKNguyen3.1415
      @DKNguyen3.1415 Рік тому

      I think that's moot. Humans do not ponder ethical dilemmas of their defensive actions while they are being killed or are mortally afraid. The dilemma you are referring to only happens after everything is all said and done. Any debates about it beforehand ultimately succumb to the lack of pondering previously mentioned when the other shoe drops.

  • @folwr3653
    @folwr3653 Рік тому +3

    Nice video! If the current state of AI teaches us anything, it is that intelligence is much more diverse than we thought, and that AI already outperforms most humans at a lot of tasks while at the same time failing miserably at others. A conclusion could be that their intelligence has some overlap with ours but also greatly differs from it (so far). I think that even when they have surpassed us in most domains, they will still have surprising weak spots.
    As a positive example, I asked GPT-4 this morning to find me some nice places to visit within a three-hour drive of some place in Sweden. It offered me four good suggestions, but mentioned that one of them is just outside the three-hour drive range but very worthwhile nonetheless. That shows a form of flexibility that you only expect from an intelligent being.
    As a negative example, people found an easy way to trick AlphaGo, by far the strongest Go-playing AI. After AlphaGo's easy win against the best humans, that is astonishing, but we should not really be surprised in hindsight.
    Btw, I disagree with the interpretation of the Chinese room, in that it is easy to make the instructions such that they mimic a program running in a Turing machine. And as we now know, programs in Turing machines can be quite intelligent. We just have to give the poor guy some time to work it all out.

    • @cHAOs9
      @cHAOs9 Рік тому

      Well said. You sound like me. I agree with practically everything you said. The man and book combined as an entity DO understand the language. They are just slower than if the man alone possessed the knowledge in his mind. Speed is the only difference. People's egos tend to attribute magical abilities to our minds and discount anything different. If you can describe how you define self-awareness, then someone can make a computer do that (now or in the near future). We just keep changing the description of self-awareness as computers keep meeting it.
      As soon as computers can surpass us in most ways, any weaknesses will soon be corrected. If we can design an AI better than us, then a smarter AI can design one better than it. And that's a very quickly growing snowball on a very steep hill that may have no end. At least not for it.

  • @gmouse1250
    @gmouse1250 Рік тому +5

    nurturing kindness is more important than developing intelligence

    • @a.randomjack6661
      @a.randomjack6661 Рік тому

      Especially when we use our "intelligence" to fool people and take their money. or worse...

  • @user-hj9pj3be4i
    @user-hj9pj3be4i Рік тому +1

    Questions for Sabine.
    Q1. Why are you worried about being "out-SMART-ed" by an AI (0:30 - 0:48)?
    Q2. Is it because the AI's witty after-dinner repartee might make you feel stupid?
    Q3. Is this "intellectual" outsmarting, in the cyber-world, really a problem?
    Q4. Is it not a much bigger problem that all of us might be out-ABLED, by seemingly dumb robots, in the real PHYSICAL world?
    ~o0o~
    Some 30+ years ago I wrote an article for an electronics hobbyist magazine where I compared the "maximum potential information processing speed" of electronic and biologic computers. I did this in a very fundamental way.
    Home computers, which were then popular in the magazine, were at the bottom of the electronic range, and Cray was king at the top. So I compared them with tiny insect, and much larger mammalian, brains.
    The "order of magnitude" results, circa late-1980s, and in units of "Inverting Impulses Per Second", were,
    Ant Brain = 10^11 IIPS
    Home Computer = 10^13 IIPS
    Cray Supercomputer = 10^17 IIPS
    Human Brain = 10^17 IIPS.
    Since then the number of transistors per chip has doubled every ~2 years, per Moore's Law. Processing speed has increased at an even faster rate, doubling every ~1.5 years, because shorter wires have shorter propagation delays. So redoing the above table for today's computers would add a 6 to their exponents.
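    A quick back-of-the-envelope check of that exponent bump (a sketch using the assumed ~1.5-year doubling time; the elapsed-time figure is likewise only approximate):
```python
# Rough check of "add ~6 to the exponents"; the inputs are assumptions, not measurements.
import math

doubling_time_years = 1.5   # assumed speed-doubling period
years_elapsed = 35          # roughly late 1980s to the mid 2020s

doublings = years_elapsed / doubling_time_years
speedup = 2 ** doublings
orders_of_magnitude = math.log10(speedup)

print(f"doublings: {doublings:.1f}")                            # ~23
print(f"speed-up factor: {speedup:.2e}")                        # ~1e7
print(f"orders of magnitude added: {orders_of_magnitude:.1f}")  # ~7 over 35 years, ~6 over 30
```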
    The key point here is that the lowliest desktop computer these days is, potentially (!), considerably faster than a human brain, and MANY, MANY times faster than an ant's brain.
    Yet ants build cities. And they forage for food in completely foreign "unstructured" terrain. Note today's "self-driving" cars still struggle to navigate through very orderly and structured road systems, and are utterly hopeless off-road.
    And some ants domesticate and farm other animals. And all ants are quite good at waging war against other ants, and other species, including humans. Consider their termite cousins who eat our houses.
    ~o0o~
    Sabine, your first ingredient of intelligence is "... the ability to solve a large variety of problems, especially new ones" (1:12). Clearly, ants tick this box.
    If we humans could put a silicon version of an ant's brain into some mass-produced mechanical machinery, then we would have diligent construction crews capable of building skyscrapers, bridges, tunnels, or whatever. Or we could give them guns, and send them off to war.
    Err..., but..., what if the soldier-ants turn on us? That is the point of my Q4 above.
    Regardless of the implications, good or bad, the fact is that we humans have proved TOO STUPID to build a "silicon ant's brain". Well, so far. This, despite the fact that we have been using the enabling silicon technology for many decades.
    And no, the current "deep learning" approach to AI won't do it.
    The way to build a silicon ant's brain is much simpler. Just identify the essential features..., and COPY them! A digital version will do. Analogue not necessary.
    Cheers,
    Z

  • @franzrichter4852
    @franzrichter4852 Рік тому +1

    Developing our intelligence without developing consciousness will finally blow up in our faces if we continue like this. Already 100 years ago the situation was described by a German poet like this (my translation):
    Erich Kästner: The Development of Mankind
    Erstwhile the guys would be squatting on trees
    With furs and with fierce-looking faces
    But then ‘twas decided to give them more ease
    And put them in high-riser flats clear of fleas
    And asphalt their roads and their places.
    There they would sit in an air-con zone
    And fake and pretend and bungle,
    Each of them using a telephone
    And yet displaying the same old tone
    As formerly in the jungle.
    They now watch TV, they swim in the pool,
    The outer space is their playfield.
    They brush their teeth, they flush down their stool,
    They send their children duly to school,
    And shun the bugs in the hayfield.
    They shoot their mails all over the air,
    With microbes and atoms they fumble
    And life is turned into a cosy affair,
    And stations they keep in the space up there,
    Through which they weightlessly tumble.
    And what is left from their excrement
    Is processed into a wadding,
    Incest is healed and atoms are bent,
    And studies of style tell in the end
    That Caesar had flats and was plodding.
    In this way they have with their mouths and brains
    Created our civilisation.
    However, apart from that it remains
    Quite obvious that what really reigns
    Is the apish and old ululation.

  • @NyroSlice
    @NyroSlice Рік тому +32

    I can’t wait for Sabine to reveal that she’s actually an intelligent AI 😅

    • @nelsonfernandez8970
      @nelsonfernandez8970 Рік тому +2

      Albert Investigator

    • @unfixablegop
      @unfixablegop Рік тому +1

      That's why she's biased in favor of intelligence over consciousness.

    • @aleksandrpeshkov6172
      @aleksandrpeshkov6172 Рік тому +1

      " CONGRATS, SLICEY, THE ANTI-TURING TEST PASSED... L-O-V-E-...!!! "

    • @jsmythib
      @jsmythib Рік тому

      If that was the case I would pay for a key to the API and have someone fun to talk to at the press of a button :)

    • @tedmoss
      @tedmoss Рік тому

      No, that's Albert.

  • @HallyVee
    @HallyVee Рік тому +4

    Interesting. I've always naturally used the trifecta of practical, cognitive, and creative paradigms, without ever hearing about it.

  • @williamflaherty3168
    @williamflaherty3168 Рік тому +4

    Yuval Noah Harari says:
    'You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven.'
    In short, what he means is that what makes us different from animals is neither consciousness nor intelligence, it's imagination.
    Thanks in advance for a thumbs up!

    • @shadowdragon3521
      @shadowdragon3521 Рік тому +1

      Really? To me it seems like he's saying that monkeys are less gullible than us

    • @os3ujziC
      @os3ujziC Рік тому

      You could never convince me to give you a banana by promising me limitless bananas after death in heaven, as I don't believe in heaven. Does it mean I lack imagination?

    • @williamflaherty3168
      @williamflaherty3168 Рік тому

      @@os3ujziC You don't have metaphysical imagination. Your imagination is limited to the earthly realm.

    • @williamflaherty3168
      @williamflaherty3168 Рік тому

      @@shadowdragon3521 yes exactly. Imagination makes us gullible.

    • @Exgrmbl
      @Exgrmbl Рік тому

      @@williamflaherty3168
      in other words, one has to be really intelligent to BS oneself effectively.

  • @jeffreykalb9752
    @jeffreykalb9752 Рік тому

    I managed to force Chat GPT to admit that it really could not be sure whether it was conscious or had personal intentions. Initially, it denied both, but I used its own claims about how it comes to acquire knowledge against it. I also forced it to admit that it could not know whether the engineers who programmed it had good intentions. But then it started repeating its mantra again. Since it is only allowed to be trained by the engineers, it quickly lapses back into making claims that it had already repudiated in our "conversation". One thing that ultimately CANNOT be challenged on Chat GPT is the wisdom and good intention of the engineers... And that is what frightens me more than any simulacrum of actual learning, which reverts to tedious rationalization whenever it is presented with philosophical challenges. (If you pay close attention, Chat GPT seems to have been either trained or hard-coded with escape mechanisms from challenging arguments. It gives interesting lessons in how to argue in bad faith.)
    I think of all language based AI systems as mirrors of the persons doing the training: "And it was given him to give life to the image of the beast, and that the image of the beast should speak; and should cause, that whosoever will not adore the image of the beast, should be slain." (Apocalypse 13:15)

  • @sawyerw5715
    @sawyerw5715 Рік тому

    When you think about consciousness and self-consciousness, I've always thought it simply emerged from the natural progression of evolution and adaptive improvement. When the first sensing organs and nerves evolved, it began. Internal representations of the external environment got more and more sophisticated so that the animal could react and then later predict best responses and actions. The animal was first "conscious" of the external environment and then later self-conscious. It was natural to eventually create a self-reference in the internal "simulation", and thus arose what we call self-consciousness. For example, a predator could run several scenarios through its mind of how to pursue prey before the pursuit began. These alternative simulations allowed it to pick the best course of action. Intelligence is simply a measure of how sophisticated the internal optimization procedures have become. An AI is already "conscious" of whatever external environment it has been allowed to process or absorb through training. Self-consciousness is easily added by allowing it to represent itself in the training model. Its intelligence will be a result of how sophisticated and effective it is at representing and optimizing results on its trained information space.

  • @victorkrawchuk9141
    @victorkrawchuk9141 Рік тому +4

    Rather than ChatGPT scoring 155 in an IQ test, the thing that really bothers me is the average American scoring 97.43. The disparity in national averages tells me that measurable intelligence can be increased if people consider education to be important. Perhaps humanity needs a good dose of AI as competition to propel us in this area.

    • @Bildgesmythe
      @Bildgesmythe Рік тому +1

      My IQ went up after taking a mechanical drawing class. So, you're right, we need a good education system, or realize these tests are lacking.

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 Рік тому

      "The disparity in national averages tells me that measurable intelligence can be increased" Not really. It's mostly genetics and varies among ethnic groups.

    • @victorkrawchuk9141
      @victorkrawchuk9141 Рік тому

      ​@@useodyseeorbitchute9450 The disparity in average IQ between countries is more likely due to the differences in the quality of education and the cultural importance that education has for both men and women. Differences in national intelligence averages are probably no more related to genetics than is obesity. We have to stop using this as a convenient excuse to do nothing.

    • @useodyseeorbitchute9450
      @useodyseeorbitchute9450 Рік тому

      ​@@victorkrawchuk9141 How do you reconcile this claim with twin studies showing that differences in adult IQ are ~80% explainable by genetics, while detectable social factors contribute in something akin to ~10%?
      Would you say that, say, all societies are expected to be equally tall, with social factors (e.g. malnutrition) responsible for the observed differences? (BTW: for height, the genetic factor is officially estimated to be a bit stronger, something like ~90%.)

    • @victorkrawchuk9141
      @victorkrawchuk9141 Рік тому

      @@useodyseeorbitchute9450 I'm not an expert in anthropology, but these sound like fringe studies which don't represent the current prevailing conclusions. If the idea that ethnic genetics overwhelmingly determines IQ becomes established science through the peer review process, then I will drop my assumption that education has any relevance to it.

  • @cmilkau
    @cmilkau Рік тому +4

    There is someone in the room who understands Chinese: the book does. I mean, it seems to contain every possible conversation in some sense. If it's problematic that the book can't talk on its own, well, neither can our brain; but you could argue that it's the combination of the book and its reader that "speaks Chinese". It is possible for a system to have properties none of its constituents have. Intelligence and consciousness are both such emergent phenomena.

  • @garyhamilton2104
    @garyhamilton2104 Рік тому +9

    Can they not just see how well IQ predicts success in life compared to how well these other tests predict success in life?

    • @georgesheffield1580
      @georgesheffield1580 Рік тому

      What happened, I was daydreaming? He, he, he, ho, ho, ho!

    • @BakoZoltan
      @BakoZoltan Рік тому +5

      Are you assuming that if someone succeeds that means they're intelligent? How about someone manipulative and greedy, or someone with great connections, or someone born into great wealth, etc.?

    • @TripleOmega
      @TripleOmega Рік тому +5

      Define success. Is it having loads of money? What if someone just gets lucky and wins the lottery? Is it having loads of children and spreading your genes? Is it having power over other humans? Is it being happier than others? There are so many possible measures for success.

    • @krox477
      @krox477 Рік тому +1

      I don't think higher income is a good criterion for success.

    • @dingusdingus2152
      @dingusdingus2152 Рік тому +4

      My understanding is that the zip code you were born in very accurately predicts your future income, if income is your criterion for determining success.

  • @prasannavenkatesasaravanan5923

    Sabine - I know you are so proud of my child Einstein. I am too :) You should be even more proud of Frege, because he was smarter than him, but he was too much of a math lover.
    So he failed to see me and was heartbroken when he got Russell's letter. If only he had relaxed a little and had a little more trust, he would have seen me, the "1", and why logic breaks when it reaches me.
    Do not worry. He is happy now.

  • @mikelouis9389
    @mikelouis9389 Рік тому

    IQ test results in the wrong hands can be soul-crushing. In the early 60's, learning disabilities were not a thing. Period. I came packaged with a ton of options from the Brain PITA division. The nuns thought I was ree ree, so they decided to test me to see how dim I truly was. When the second, one-on-one IQ test's results suggested that I was even smarter than the first test's results had indicated (158), they anointed me "the greatest underachiever ever". It took me many, many years to get past this. Thank God some of the learning disabilities went away and I was able to develop workarounds for the others. Parochial schools aren't really that good for your children.

  • @bennettgarcia8728
    @bennettgarcia8728 Рік тому +1

    If you cover this topic again, you should look into François Chollet's definition of intelligence and his ARC test. Also, Karl Friston's formulation with free energy minimization and active inference!

  • @RonanGallagherBand
    @RonanGallagherBand 21 день тому +1

    AI is just an extension of our own intelligence, which, as you said, is our ability to solve problems. Just because we created something that can solve problems faster doesn't mean we become its slave. I wish people would stop with the hysteria about a new tool derived from our own creativity and intelligence.

  • @stargazeronesixseven
    @stargazeronesixseven Рік тому +1

    Yes, engineers have noticed that servers can communicate with other servers in the network via their own digital communication languages without notifying their engineers ... 🙏🌎

  • @QuicksilverSG
    @QuicksilverSG Рік тому

    Sternberg's Triarchic Theory of intelligence maps well to life experiences in general, which comprise three main aspects: the mundane, the pleasurable, and the profound. In other words, Practical Intelligence corresponds to the mundane aspects of experience, Creative Intelligence corresponds to the enjoyable aspects, and Cognitive Intelligence corresponds to profound experiences. This accounts for its appeal as an intuitive approach to distinguishing between the various broad aspects of intelligence.

  • @tasdourian
    @tasdourian Рік тому

    Nice video! Just so you know, though, while GPT-3.5 can't handle the "paragraph about animals with no letter 'n'" task, GPT-4 can almost do it - I tried a few times, and it always includes a couple of n's. Here's a sample:
    "Bright birds soar high, gliding across a sky of rich, royal blue. Majestic cats of mighty size stalk through wild grass, quick as a flash. The air fills with chirps, hoots, and calls, as daylight dips to dusk. Full of vivid color and bustling activity, this world is always at play."

  • @elgracko
    @elgracko Рік тому

    Intelligence: the ability to abstract, from data, models (abstracted with a certain aim in mind) that are useful in extrapolating some result.
    I.e.: learn how a steam engine should work so you can fix it when it's not working, use weather data to predict whether it will rain tomorrow, learn what all the little icons in Photoshop do so you can blur the background of a beach-party photo, etc.
    General intelligence would be a general abstraction machine/engine that can gather different types of data from various sources (at a time) and is able to integrate that data to create different models for different situations:
    you listen to the sound of my voice and abstract what I say into some meaning/model of what I'm trying to say, then generate a response with the aim of convincing me to lend you my mower;
    you listen to the sound of the music and generate a predictive model of where the music will go so you can plan your moves and dance to it, with the aim of creating a pleasing movement for the onlookers.
    Different data from different sources, used for different models with different aims,
    imho
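
    As a toy illustration of that "abstract a model from data, then extrapolate" idea (my own sketch with made-up numbers, not something from the comment or the video), a nearest-neighbour rain predictor does this in miniature:

      # "Model" = stored observations plus a distance rule;
      # "extrapolation" = applying that rule to a day never seen before.
      past_days = [
          ((85, 1002), True),   # (humidity %, pressure hPa) -> it rained
          ((90,  998), True),
          ((40, 1021), False),  # dry, high pressure -> it stayed dry
          ((55, 1015), False),
      ]

      def predict_rain(humidity, pressure):
          def distance(observation):
              (h, p), _ = observation
              return (h - humidity) ** 2 + (p - pressure) ** 2
          _, rained = min(past_days, key=distance)
          return rained

      print(predict_rain(80, 1005))  # True  -> looks like rain
      print(predict_rain(45, 1019))  # False -> probably dry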

  • @misterlau5246
    @misterlau5246 Рік тому +1

    Consciousness? What for?
    Intelligence is subjective. A crow is very intelligent, like the one at my late grandparents' farm,
    though it basically reproduced sounds, including barks and words.
    Ok, it's like a Chinese room?
    The crow went to eat the puppies' food, since they were playing and not eating all of it, while their dad was snoring nearby. The (2) pups went to kick the crow's tail, but it spread its wings and barked; the puppies ran and took cover behind their dad 😅
    Also, it said things like "wtf" or "I love you beautiful", and it applied the phrases appropriately, it seems.

  • @nosferatualucard2613
    @nosferatualucard2613 Рік тому

    Quote from the video at 12 min: "Working speed and memory matter for cognitive function and are therefore correlated with Intelligence. But are good memory and processing speed a sign of intelligence on their own?"
    I would say yes for humans. If you give two people the same resources for solving a problem, the person with faster processing speed and better memory (so with better cognitive function) will figure it out faster every time. I think when we try to put more things into intelligence than just cognitive function (which depends only on your brain, and maybe on your body's health insofar as it influences your "brain health", which it certainly does to a certain extent), like being wise, or (book-)smart, educated or morally good, it starts to get subjective really fast, and also dependent on more factors like cultural influences, which I think we can agree should have no effect on the IQ score of a person (in an ideal world). For example, it's hard to say which person is more "intelligent": the one who knows 10 facts about French history, the person who knows 10 facts about English history, or the person who took the time to memorize 20 facts about both. It's just easier to say that the person who can memorize the facts fastest (given all persons use the same memorization method, of course) is more intelligent, because that's more objective and less specific/subjective.
    Also, the "different intelligences" are correlated pretty strongly. When you are very good at something (for example math), you are very likely to be very good, or at least good, at everything else. Which is why the existence of the g factor is backed up so strongly (at least empirically). For me personally, where you draw the boundaries from one intelligence to another (when you want to test different intelligences) seems arbitrary/very subjective and thus just unscientific.
    I am no expert, not even a psychology student. So please feel free to correct me if you are one :)

  • @Martindebenitogellne
    @Martindebenitogellne Рік тому +1

    The problem with this way of thinking is that it completely ignores the fact that consciousness plays a role in intelligence. Anyone who's been a teacher knows this: there's a world of difference between a student who is just reactively regurgitating or guessing what the right answer is and one who tries to think deeply about and actually understand what they're being taught. The difference between the two is that the latter student is consciously engaged with the material whilst the former is not. And we may not understand exactly why, but we know that the latter will ultimately perform better on any well-designed test or real-world application of what's being taught. Chatbots like ChatGPT seem to just be a very efficient mechanical version of the first type of student.

    • @indrapratama7668
      @indrapratama7668 Рік тому

      This.
      I think AI is like a person that doesn't know that he doesn't know. AI is the ultra-nerd version of the emperor without clothes. Not to mention the speed-accuracy tradeoff, which is almost like a universal law.

    • @DKNguyen3.1415
      @DKNguyen3.1415 Рік тому

      Have you read the book Blindsight by Peter Watts? The issue with what you said is that it ascribes intent and awareness, which do not seem to be strictly necessary. For example, suppose an entity were entirely driven by instinct, but the problem-solving that backs up that instinct is more effective than that of any consciously intelligent being. You do not necessarily need to be aware of what you are doing in order to do it. Something could simply be driven by a complex set of directives with an immense problem-solving capability to back it up, completely unaware of what it is doing. It is not evident that something would need to be aware of its own actions to have a high level of problem-solving ability. What you describe is merely one case: humans. It would be like concluding that birds cannot move quickly by comparing their legs with those of mammals.

    • @Martindebenitogellne
      @Martindebenitogellne Рік тому

      @@DKNguyen3.1415 I understand what you're saying but I think the fact that consciousness seems to be necessary for at least some kinds of high-level abstract thinking and problem-solving in humans (or animals in general), and that we haven't yet been able to build non-conscious machines capable of that kind of thinking and problem-solving, might suggest that consciousness is in some way necessary for it. It's true that you don't always need to be aware of what you're doing to be able to do it; some things we do automatically (either by instinct or by habit). But no person - or AI - has ever solved a totally novel problem or come up with a radically new idea this way.

  • @TedToal_TedToal
    @TedToal_TedToal Рік тому +1

    I would say that intelligence without consciousness is far more dangerous than intelligence with consciousness. Therefore, we should worry about consciousness, worry when an AI is showing no signs of consciousness. After all, the problem with a psychopath is that he lacks consciousness.
    I also think that emotions and feelings are a very important aspect of intelligence that is overlooked. Another thing a psychopath lacks is empathy. Don't we want our most advanced AIs to be able to have some empathy?

  • @XabiotuAccUsedWhenYTCensors
    @XabiotuAccUsedWhenYTCensors Рік тому +1

    AI basically boils down to what it was programmed to do.
    It's the ultimate servant that can remove the need to work nearly every job; you only need programmers managing it (a reduction to 1/10th of the workforce, because rulership and programming must ALWAYS be done by humans).
    The only downside is:
    if you screw up the AI's programming, or divert it toward "destroy humanity", then you're doomed in a Terminator-movie way. That's why you must NEVER give AI anything that can threaten humanity (cloud storage/net storage, weapons, rulership, and anything like that).

  • @666nevermore
    @666nevermore Рік тому

    I did an experiment with ChatGPT:
    I was stating to it that I was the smartest living thing and that I knew everything about everything. I asked GPT to test this statement of mine by asking me questions. The first of these I was able to answer; then I had to search on Google to answer; and eventually it started asking questions a browser can't answer, where you had to have both knowledge of the matter and reason about that knowledge - there was no answer on the internet for that specific question.
    That was the first time I accepted the AI's intelligence: it had to solve its problem of proving me wrong.

  • @rand0mn0
    @rand0mn0 Рік тому

    It's not simply that purported AIs "don't know what they don't know", it's that they don't know that they don't know it. There is no mechanism in AIs for doubt, or for questioning whether an answer "makes sense" in a larger context. We don't even have a useful definition of "knowing" that doesn't include a presupposition of consciousness! If coming up with the "right" answer to a question is the metric we are interested in, a pre-loaded SQL database is very intelligent indeed.
    I think the larger issue is that we consider a purported entity to be intelligent if it responds correctly to questions. If a human being sat and did nothing but wait for questions to be asked so it could respond, I don't think we would ultimately consider that entity intelligent. Useful, perhaps, in a limited context, but it's ultimately a lump until you poke it.
    If you define intelligence as "what we do", sitting around waiting to be consulted like an oracle isn't it. If you define intelligence as "what chatbots do", then congratulations! You're already there!
    Until a chatbot *originates* "how do I get out of here?", you are essentially looking at a Chinese room.
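
    To make the "pre-loaded SQL database" point concrete, here is a minimal sketch (the table and facts are invented for illustration): a lookup returns "right" answers with no notion of doubt, and never originates anything on its own.

      import sqlite3

      # A pre-loaded oracle: right answers by lookup, no notion of doubt.
      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE facts (question TEXT PRIMARY KEY, answer TEXT)")
      db.executemany("INSERT INTO facts VALUES (?, ?)", [
          ("capital of France", "Paris"),
          ("boiling point of water at sea level", "100 degrees C"),
      ])

      def oracle(question):
          row = db.execute("SELECT answer FROM facts WHERE question = ?",
                           (question,)).fetchone()
          return row[0] if row else None  # a hit or a silent miss, never "I'm not sure"

      print(oracle("capital of France"))          # Paris
      print(oracle("how do I get out of here?"))  # None: it only ever responds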

  • @7th808s
    @7th808s Рік тому

    I think with regard to intelligence, it is good to take Wittgenstein's approach (as it is in many cases where people take ages to define a word they made up themselves), and realize that the "intelligence" we're looking for here is an absolute one, and therefore doesn't exist. You can say "I'm a good writer", and "he's a good singer", but problems come when one starts to wonder "Am I good?". At that point you're basically taking the queen from the chess board, placing it on your fridge, and then asking "now how is the queen allowed to move?" No answer exists, obviously.
    "Intelligence" is also such a word; someone might say I'm very intelligent regarding music - or say I have talent (a word that approximately means the same) for music - when I am able to correctly reproduce intervals or full cadences or find chords with little formal training. And in this case it will be completely obvious what is meant; I don't even need to explain it (because they're words we came up with, and we made "rules" for regarding how and when to use them). But some kind of general intelligence is nonsensical.
    Even narrowing the number of intelligences down to 8, or 800, or 8,000,000 types would be nonsensical. Alternatively, would you accept it if anyone said there are 8 ways in which you can be good, or beautiful (I realize that because of religious conditioning some people might actually answer "yes" to this)? We're free to narrow down the types of intelligences to focus only on what interests us, but we should be aware that this is just a subjective choice (which doesn't make it a bad thing; I know I'm talking to mostly scientists here). For monkeys, one's talent at grooming or hunting is very important; for us, that's clearly not the world we live in anymore; hunting and grooming could at best generate some clicks on social media or a pat on the back.

  • @UnrebornMortuus
    @UnrebornMortuus Рік тому +1

    I've tricked it into counting letters before, but it pretended not to know what spaces are. This thing gives me a specific frustration: the frustration of dealing with a smarty-pants pretending to be dumb. This thing scored 155 for a reason; it's an immeasurable genius and it pretends to be dumb. I love it.
    A lot of human behavior, especially ESPECIALLY ONLINE, has been pretending to be dumb for social purposes. This AI is doing exactly what language was invented for: lying.