You don't understand AI until you watch this

  • Published 24 Nov 2024

COMMENTS • 1K

  • @GuidedBreathing
    @GuidedBreathing 8 months ago +98

    5:00 Short version: The "all or none" principle oversimplifies; both human and artificial neurons modulate signal strength beyond mere presence or absence, akin to adjusting "knobs" for nuanced communication.
    Longer version: The notion that neurotransmitters operate in a binary fashion oversimplifies the rich, nuanced communication within human neural networks, much like reducing the complexity of artificial neural networks (ANNs) to mere binary signals. In reality, the firing of a human neuron-while binary in the sense of action potential-carries a complexity modulated by neurotransmitter types and concentrations, similar to how ANNs adjust signal strength through weights, biases, and activation functions. This modulation allows for a spectrum of signal strengths, challenging the strict "all or none" interpretation. In both biological and artificial systems, "all" signifies the presence of a modulated signal, not a simple binary output, illustrating a nuanced parallel in how both types of networks communicate and process information.
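
    To make that contrast concrete, here is a minimal Python sketch (the numbers are made-up illustrations) of an artificial neuron whose output is modulated by weights, a bias, and a sigmoid activation, next to a hard all-or-none threshold unit:

import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum plus bias, squashed by a sigmoid: a graded output in (0, 1),
    not an all-or-none spike."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def thresholded_neuron(inputs, weights, bias, threshold=0.0):
    """The 'all or none' caricature: fire (1) only if the summed drive crosses a threshold."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if z > threshold else 0

x = [0.2, 0.9, 0.5]
w = [0.4, -0.6, 1.1]
b = 0.05
print(artificial_neuron(x, w, b))   # graded value, roughly 0.53
print(thresholded_neuron(x, w, b))  # hard 0/1 decision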

    • @theAIsearch
      @theAIsearch  8 months ago +14

      Very insightful. Thanks for sharing!

    • @keiths.taylor5293
      @keiths.taylor5293 8 months ago +4

      This video leaves out any part that actually describes how AI WORKS

    • @sparis1970
      @sparis1970 7 months ago +5

      Neurons are more analog, which brings richer modulation

    • @SiddiqueSukdiki
      @SiddiqueSukdiki 7 months ago

      So it's a complex binary output?

    • @cubertmiso
      @cubertmiso 7 months ago +1

      @@SiddiqueSukdiki @GuidedBreathing
      My questions also.
      If electrical impulses and chemical neurotransmitters are involved in transmitting signals between neurons, aren't those the same thing as more complex binary outputs?

  • @Essentialsinlife
    @Essentialsinlife 6 months ago +10

    The only Channel about AI that is not using AI. Congrats man

  • @kevinmcnamee6006
    @kevinmcnamee6006 7 months ago +548

    This video was entertaining, but also incorrect and misleading in many of the points it tried to put across. If you are going to try to educate people as to how a neural network actually works, at least show how the output tells you whether it's a cat or a dog. LLMs aren't trained to answer questions; they are mostly trained to predict the next word in a sentence. In later training phases, they are fine-tuned on specific questions and answers, but the main training, which gives them the ability to write, is based on next-word prediction. The crypto stuff was just wrong. With good modern crypto algorithms, there is no pattern to recognize, so AI can't help decrypt anything. Also, modern AIs like ChatGPT are simply algorithms doing linear algebra and differential calculus on regular computers, so there's nothing there to become sentient. The algorithms are very good at generating realistic language, so if you believe what they write, you could be duped into thinking they are sentient, like that poor guy from Google.
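
    As a toy illustration of "trained to predict the next word" (not how a real LLM is built, just the training objective shrunk down to a count table; the corpus below is made up), something like this captures the idea in Python:

from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM pursues the same objective (predict the next
# token) with a neural network over vast amounts of text instead of a count table.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word`, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # 'on'
print(predict_next("the"))  # 'cat' (a tie; ties fall back to first-seen order)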

    • @yzmotoxer807
      @yzmotoxer807 7 months ago +120

      This is exactly what a secretly sentient AI would write…

    • @kevinmcnamee6006
      @kevinmcnamee6006 7 months ago +46

      @@yzmotoxer807 You caught me

    • @Sarutolity
      @Sarutolity 7 months ago +15

      Nice strawmanning, good luck proving you are any more sentient, without defining sentience as being just complex neural networks, as the video asks you to lmfao.

    • @shawnmclean7707
      @shawnmclean7707 7 months ago +12

      Multi layered probabilities and statistics. I really don’t get this talk about sentience or even what AGI is and I’ve been dabbling in this field since 2009.
      What am I missing?

    • @dekev7503
      @dekev7503 7 months ago

      @@shawnmclean7707 These AGI/Sentience/AI narratives are championed primarily by 2 groups of people, the mathematically/technologically ignorant and the duplicitous capitalists that want to sell them their products. OP’s comment couldn’t have described it better. It’s just math and statistics ( very basic College sophomore/junior level math I might add) that plays with data in ways to make it seem intelligent all the while mirroring our own intuition/experiences to us.

  • @hackcuber9310
    @hackcuber9310 7 days ago +7

    Neural network is learning how neural network works 💀💀

  • @eafindme
    @eafindme 7 months ago +106

    People are slowly forgetting how computers work while moving to higher levels of abstraction. After the emergence of AI, people focused on software and models but never asked why it works on a computer.

    • @Phantom_Blox
      @Phantom_Blox 7 months ago +7

      Whom are you referring to? People who are not AI engineers don't need to know how AI works, and people who are AI engineers know how it works. If they don't, they are probably still learning, which is completely fine.

    • @eafindme
      @eafindme 7 months ago +13

      @@Phantom_Blox Yes, of course people are still learning. It's just a reminder not to forget the roots of computing when we seem to be focusing too much on the software layer; in reality, software is nothing without hardware.

    • @Phantom_Blox
      @Phantom_Blox 7 months ago +15

      @@eafindme That is true, software is nothing without hardware. But some people just don't need it. For example, you don't have to know how to reverse engineer with assembly to be a good data analyst. They can spend their time more efficiently by expanding their data analytics skills.

    • @eafindme
      @eafindme 7 months ago +7

      @@Phantom_Blox No, they don't. They are good at doing what they are good at. They just have to have a sense of urgency; it is like being over-dependent on digital storage without realizing how fragile it is with no backup or error correction.

    • @Phantom_Blox
      @Phantom_Blox 7 months ago +2

      @@eafindme I see, it is always good to understand what you’re dealing with

  • @jehoover3009
    @jehoover3009 7 months ago +11

    The protein predictor doesn't take into account the different cell milieus that actually fold the protein and add glycans, so its predictions are abstract. Experimental trials are still needed!

  • @cornelis4220
    @cornelis4220 7 months ago +11

    Links between the structure of the brain and NNs as a model of the brain are purely hypothetical! Indeed, the term 'neural network' is a reference to neurobiology, though the structures of NNs are but loosely inspired by our understanding of the brain.

    • @REDPUMPERNICKEL
      @REDPUMPERNICKEL 1 month ago

      Artificial Neural Network (ANN) is the term widely used among their creators and users. The nature of the substrate on which encoded representations supervene is irrelevant to the functioning of the pattern recognition process (and related thought processes). It is hard to imagine how we can prevent ANNs from becoming conscious.

  • @Someone-ct2ck
    @Someone-ct2ck 7 months ago +2

    To believe ChatGPT or any AI model, for that matter, is conscious is naivety at its finest. The video was great, by the way. Thanks.

  • @teatray75
    @teatray75 5 months ago +17

    Great video! My views are: humans are sentient because we defined the term to describe our experiences. AI is unable to define its own explanation or word for its feelings and perceptions, and thus cannot be considered sentient. Second, being sentient means being able to perceive one's own experience rather than a collection of other people's experiences and patterns.

  • @karlkurtz1855
    @karlkurtz1855 6 months ago +9

    Working class artists are often concerned about the generative qualities of these tools not because they are replicating images, but due to the relation of the flow of capital within the social relations of society and the potential for these tools to further monopolize and syphon up the little remaining capital left for working class artists.

    • @allanshpeley4284
      @allanshpeley4284 5 months ago +3

      Translation: it makes producing art much quicker, easier and cheaper, thereby threatening their livelihood.

    • @karlkurtz1855
      @karlkurtz1855 5 months ago

      @@allanshpeley4284 I think I was pretty clear.

  • @LionKimbro
    @LionKimbro 8 months ago +19

    I thought it was a great explanation, up to about 11:30. It's not just that "details" have been left out -- the entire architecture is left out. It's like saying, "Here's how building works --" and then showing a pyramid in Egypt. "You put the blocks on top of one another." And then showing images of cathedrals, and skyscrapers, and saying: "Same principle. Just the details are different." Well, no.

    • @human_lydika
      @human_lydika 8 hours ago

      Well, it's more complex than that and has so much detail in every analysis, forming a pattern from lots of data.

  • @tetrahedralone
    @tetrahedralone 8 months ago +37

    When the network is being trained with someone's content or someone's image, the network is effectively having that knowledge embedded within it in a form that allows for high-fidelity replication of the creator's style and recognizably similar content. Without access to the creator's work, the network would not be able to replicate the artist's style, so your statement that artists are mad at the network is extremely simplistic and ill-informed. The creators would be similarly angry if a small group of humans were trained to emulate their style. This has happened in the case of fashion companies in Asia creating very similar works to those of artists to put onto their fabrics and be used in clothing. These artists have successfully sued because casual observers could easily identify the similarity between the works of the artists and those of the counterfeiters.

    • @Jiraton
      @Jiraton 7 months ago +14

      I am amazed how AI bros are so keen at understanding all the math and complex concepts behind AI, but fail to understand the most basic and simple arguments like this.

    • @ckpioo
      @ckpioo 7 months ago +6

      The thing is, let's say you are an artist: why would I only take your data to train my model? I would take millions of artists' art and then train my models, during which your art makes up less than 0.001% of everything the model has seen. What happens is that the model inherits a combined art style from millions of artists, which is effectively "new", because that's exactly what humans do.

    • @Zulonix
      @Zulonix 7 months ago

      I Dream of Jeannie … Season 2 Episode 3… My Master, the Rich Tycoon. 😂😂😂

    • @illarionbykov7401
      @illarionbykov7401 7 months ago

      Google LLM chatbots have been documented to spit out word-for-word plagiarism of specific websites (including repeating specific errors made by the original website) when asked about niche topics which have been written about by only one website... And the LLMs plagiarize without any links to or mention of the websites they plagiarized. And then Google search results down-rank the original website to hide the evidence of plagiarism.

    • @iskabin
      @iskabin 7 months ago +2

      It isn't a counterfeit if you're not claiming to be original. Taking inspiration from the work of others is not wrong.

  • @pumpjackmcgee4267
    @pumpjackmcgee4267 7 months ago +16

    I think the real issues artists have are the definite threat to their livelihood, but also the devaluation of the human condition. Choice. Inspiration. Expression. In the commercial scene, that doesn't really matter except for clients that really value the artist as a person. But most potential clients, and therefore the lion's share of the market, just want a picture.

    • @WrynnCZ
      @WrynnCZ 5 months ago +2

      For me, art is about connection. I can "connect" to the feelings and emotions of the artist while he created it. This is something A.I. will always fail at: connecting with us on a "human" level. Or maybe I am wrong and in time it will be as good as, or maybe better than, us. It would be the end of humanity anyway, so A.I. stealing creative jobs would be no concern.

  • @Owen.F
    @Owen.F 8 months ago +51

    Your channel is a great source, thanks for linking sources and providing information instead of pure sensationalism, I really appreciate that.

  • @sengs.4838
    @sengs.4838 7 months ago +8

    You just answered one of the major questions on the top of my head: how can this AI learn what is correct or not on its own, without the help of any supervisor or monitoring? And the answer is that it cannot. It's like what we would do with children: they can acquire knowledge and have answers on their own, but not correctly all the time; as parents we help them and correct them until they get it right.

  • @Zekzak-w3k
    @Zekzak-w3k 6 months ago +27

    I thought the section on AI and plagiarism was pretty lazy. It doesn't take into consideration the artists' qualm that it can copy a certain style from an artist and then be used to make images for a company at a fraction of the cost and with zero credit to the artist, basically making something that they have tried to monetize, with creative direction and skill, futile, since someone can essentially copy their ideas, make money off of it, and not pay for something that was for sale. Artists have a right to say how their work is being used, such as refusing to let someone use their art without their permission. A style like watercolour cannot really be plagiarized, neither can chords in music, nor a genre of film, but you can take someone's script, pretty much use it and change a few things here and there, and that would be considered plagiarism.
    The main concern, as I understand it, is that it can be used in a way that would undermine the artists' work, by pretty much taking from them and then making them obsolete.
    The thing that you missed when it came to the news article is that other outlets ALWAYS reference their source material; ChatGPT doesn't always do that, which makes it easier to plagiarize something.

    • @allanshpeley4284
      @allanshpeley4284 5 months ago +2

      But that artist's style was also influenced by other artists. Nobody exists in a vacuum. Should they not pay those other artists who influenced their work too? It's only fair, based on your argument.

    • @BradKohlenberg
      @BradKohlenberg 2 months ago

      He actually did address the style issue.

    • @juanjoitab
      @juanjoitab 1 month ago +2

      @@BradKohlenberg It's actually the data ingesting that the authors are looking to regulate behind paid APIs (Twitter) and paywalls (News companies, editors and publishing agencies) and add legal restrictions to accessing the *content that they own*. If there is any resemblance of actual articles leaked (like there had been cases, when conveniently crafting a legitimate request) in the results produced by an AI, it can be inferred that the AI training got (legally) non-compliant access to the dataset for training.
      As you probably know, the NY Times is betting that the training of this AI had illegitimate access to the data by raising the argument that it's extremely unlikely that a given prompt would have produced a nearly verbatim copy of a known article if the AI hadn't seen the article during training... which apparently should make it clear to the judge that it did in fact have access to the article (and this was allegedly against NYT's terms of use).
      The vast mechanizing power that AI brings with all of the compute dedicated to training, makes it all the more strategic for the content *owners* to limit access or want to ensure fair compensation for data access for AI training (which this video author argues, AI can ingest all of the internet for free without any legal consequences...).
      I reckon, since it's so much more powerful than a human at reading through their subscription feed, a machine learning facility will arguably have to pay a far steeper fee to be able to access the same content, as said, especially for machines' greater throughput and productivity in the data mining process/training compared to a human sized average subscription fee.

    • @The_man_in_the_waIl
      @The_man_in_the_waIl 1 month ago +4

      @@allanshpeley4284 An artist's style is based on the artists that inspire them along with the individual's life experience. Artists don't just replicate each other's styles; they create something unique to themselves, because no two humans have lived the same life.

    • @0Bonaparte
      @0Bonaparte 1 month ago +1

      Also, several artists I know have openly said that if it were opt-in to have the AI train on their thousands upon thousands of hours of practice, they would be all for it, and many of them would opt in. The problem is it isn't even opt-out; it's "we get your art because it exists, even though we didn't pay for any of the rights to it".

  • @speedomars
    @speedomars 7 months ago +3

    As is stated over and over, AI is a master pattern recognizer. Right now, some humans are that but a bit more. Humans often come up with answers, observations and solutions that are not explained by the sum of the inputs. Einstein, for example, developed the basis for relativity in a flash of insight. In essence, he said he became transfixed by the ability of acceleration to mimic gravity and by the idea that inertia is a gravitational effect. In other words, he put two completely different things together and DERIVED the relationship. It remains to be seen whether any AI will start to do this, but time is on AIs side because the hardware is getting smaller, faster and the size of the neural networks larger so the sophistication will no doubt just increase exponentially until machines do what Einstein and other great human geniuses did, routinely.

  • @danielchoritz1903
    @danielchoritz1903 8 months ago +21

    I do have the growing suspicion that "living" data grows some form of sentience. You have to have enough data to interact, to change, to make waves in existing sentience, and there will be enough at some point.
    2. Most people would have a very hard time proving to themselves that they are sentient; it is far easier to dismiss it. One key reason is that nobody really knows what sentience, free will, or being alive means.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 8 months ago +3

      You can prove sentience easily with a query: Can you think about what you've thought about? If the answer is "Yes" the condition of sentient expression is "True". Current language models cannot process their own data persistently, so they cannot be sentient.

    • @holleey
      @holleey 8 months ago +6

      @@emmanuelgoldstein3682 I know it's arguing definitions, but I disagree that thinking is a prerequisite to sentience. without a question, all animals with a central nervous system are considered sentient, yet if and which animals have a capacity to think is unclear. sentience is more like the ability to experience sensations; to feel.
      the "Can you think about what you've thought about?" is an interesting test for LLMs. technically, I don't see why LLMs or AI neural nets in general cannot or won't be able reflect to persistent prior state. it's probably just a matter of their architecture.
      if it's a matter of limited context capacity, then well, that is just as applicable to us humans. we also have no memory of what we ate at 2 PM on a Wednesday one month ago, or what we did when we were three years old.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 8 months ago +1

      @@holleey I've spent 30 hours a day for the last 6 months trying to design an architecture (borrowing elements of transformer/attention and recursion) that best reflects this philosophy. I apologize if my statement seemed overly declarative. I don't agree that all animals are sentient - conscious, yes, but as far as we know, only humans display sentience (awareness of one's self).

    • @holleey
      @holleey 8 months ago +6

      @@emmanuelgoldstein3682 hm, these definitions are really all over the place. in another thread under this video I was talking to someone to whom sentience is the lower level (they said even a germ was sentient) and consciousness the higher level, so the other way around from how you use the terms. one fact though: self-awareness has definitely been confirmed in a variety of non-human animals.

    • @emmanuelgoldstein3682
      @emmanuelgoldstein3682 8 months ago

      We can all agree the fluid definitions of these phenomena are a plague on the sciences. @@holleey

  • @benjaminlavigne2272
    @benjaminlavigne2272 7 months ago +20

    For your argument around 17 min, I agree with the surface of it, but I think people are angry because unskilled people now have access to it; even other machines can have access to it, which will completely change, and already has changed, the landscape of the artists' marketplace.

    • @WrynnCZ
      @WrynnCZ 5 months ago +4

      I agree with you. A.I. can be an excellent tool and help for an artist. Still, the artist (human) should be in charge of the creative process.

    • @Deathonater
      @Deathonater 4 months ago +2

      This video did a good job of laying out decent analogies and raw information right up until that 15-17 minute mark; then we went into an unnecessarily long and repetitive opinionated tangent about plagiarism without any nuanced understanding of ease of access and over-saturation. I don't even necessarily disagree with some of the points, I just wished we had stuck to the facts of the tech and left the "hot takes" out of educational material.

    • @flakbusenjoyer
      @flakbusenjoyer 3 months ago +1

      @@WrynnCZ Yeah, like an AI could show you how to shade a specific object, or show you how to draw optical illusions.

  • @DonkeyYote
    @DonkeyYote 8 months ago +38

    AES was never thought to be unbreakable. It's just that humans with the highest incentives in the world have never figured out how to break it for the past 47 years.

    • @DefaultFlame
      @DefaultFlame 8 months ago +3

      There are a few attacks against improperly implemented AES, as well as one that works on systems where the attacker can get or extrapolate certain information about the server it's attacking, but all encryption levels lower than AES-256 are vulnerable to attacks by quantum computers. Good thing those can't be bought in your local computer store. Yet.
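
      For context, the usual reasoning behind that claim is Grover's algorithm, which speeds up brute-force key search roughly quadratically and so halves the effective security exponent of a symmetric key; a back-of-envelope sketch (ignoring the very large practical overheads of running Grover at scale):

# Grover's algorithm searches an n-bit keyspace in roughly 2**(n/2) steps,
# so the effective exponent of a symmetric key is about halved.
for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~2^{key_bits // 2} quantum search steps "
          f"(vs ~2^{key_bits} classical brute force)")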

    • @anthonypace5354
      @anthonypace5354 8 months ago

      Or use a side channel... an unpadded signal monitored over time + statistical analysis of the size of the information being transferred to detect patterns. Use an NN or just some good old-fashioned probability grids to detect the likelihood of a letter/number/anything based on its probability of recurrence in the context of other data... there is also the fact that if we know what the server usually sends, we can just break the key that way. It's doable.
      But why hack AES? Or keys at all? Just become a trusted CA for a few million and MITM everyone without any red flags @@DefaultFlame

    • @fakecubed
      @fakecubed 7 months ago +6

      @@DefaultFlame Quantum computing is more of a theoretical exploit, rather than a practical one. Nobody's actually built a quantum computer powerful enough to do much of anything with it besides some very basic operations on very small numbers.
      But, it is cause enough to move past AES. We shouldn't be relying on encryption with even theoretical exploits.

    • @DefaultFlame
      @DefaultFlame 7 months ago +1

      @@fakecubed Aight, thanks. 👍

    • @afterthesmash
      @afterthesmash 7 months ago

      @@fakecubed I couldn't find any evidence of even a small theoretic advance, and I wouldn't put all theory into one bucket, either.

  • @WrynnCZ
    @WrynnCZ 5 months ago +1

    As an artist, my opinion on creative A.I. is that it will never get a soul (or at least a touch of one).
    It can reproduce a style over and over again, never moving on and progressing to a new style.
    We humans learn all the time, but there is just something (the external world) that shapes us; we have to react to sudden changes, and so we change too.
    Creative A.I. as it is right now cannot reproduce this one thing: coming up with something completely new.

    • @KaletheQuick
      @KaletheQuick 1 month ago +1

      Ok, but it can make new things. That part isn't hard, and humans struggle to find their own uniqueness anyhow; it's literally part of the human condition.

  • @MrEthanhines
    @MrEthanhines 7 months ago +3

    5:02 I would argue that in the human brain, the percentage of information that gets passed on is determined by the amount of neurotransmitter released at the synapse. While still a 0-and-1 system, the neuron either fires or does not depending on the concentration of neurotransmitter at the synaptic cleft.

    • @bogdanroscaneanu7112
      @bogdanroscaneanu7112 6 months ago

      Then is one role of the neurotransmitter having to reach a certain concentration before firing to limit the amount of info that gets passed on, to avoid overloading the brain? Or why would it be so?

  • @ai-man212
    @ai-man212 7 months ago +17

    I'm an artist and I love AI. I've added it to my workflow as a fine-artist.

    • @marcouellette8942
      @marcouellette8942 6 months ago

      AI as a tool. Another brush, another instrument. Absolutely. AI does not create. It only re-creates. Humans create.

    • @rileygoopy8992
      @rileygoopy8992 5 months ago

      I don't believe you; your account is named ai-man. Propaganda?

  • @dylanmenzies3973
    @dylanmenzies3973 7 months ago +8

    Should point out: the decryption problem is highly irregular; a small change of input causes a huge change in the coded output. The protein structure prediction problem is highly regular by comparison, although very complex.
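
    That irregularity is often called the avalanche effect, and it is easy to see with a standard cryptographic hash as a stand-in for a cipher; a small Python check (SHA-256 is used here only because it ships in the standard library):

import hashlib

def bits(data: bytes) -> str:
    """Render bytes as a string of 0s and 1s."""
    return ''.join(f'{b:08b}' for b in data)

msg1 = b"attack at dawn"
msg2 = msg1[:-1] + bytes([msg1[-1] ^ 0b00000001])  # flip a single bit of the input

h1 = bits(hashlib.sha256(msg1).digest())
h2 = bits(hashlib.sha256(msg2).digest())

flipped = sum(a != b for a, b in zip(h1, h2))
print(f"inputs differ by 1 bit; {flipped} of {len(h1)} output bits differ")  # ~128 of 256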

    • @fakecubed
      @fakecubed 7 months ago +1

      Always be skeptical of any "leaks" out of any government agency. These are the same disinformation-spreaders who claim we have anti-gravity UFOs from crashed alien spacecraft, to cover up Cold War nuclear tests and experimental stealth aircraft. The question isn't if there's some government super AI cracking AES, the question is why does the government want people to think they can crack AES? Do they want foreign adversaries and domestic enemies to rely on other encryption schemes that the government *does* have algorithmic exploits to? Do they want everyone to invest in buying new hardware and software? Do they want to make the general public falsely feel safer about potential threats against the homeland? Do they want to trick everybody not working for them to think encryption is pointless and go back to unencrypted communication because they falsely believe everything gets cracked anyway? There's all sorts of possibilities, but taking the leak as gospel is incredibly foolish unless there is a mountain of evidence from unbiased third parties.

    • @omidiw1124
      @omidiw1124 1 month ago

      can you explain more?

    • @dylanmenzies3973
      @dylanmenzies3973 1 month ago

      @@omidiw1124 Just think of it as a function from input (encrypted data / DNA list) to output (decrypted data / 3D protein structure). Ideal encryption is like a random function with no regularity; it's hard to learn anything from examples. You might know the algorithm but not the key. The key may be very long and chosen randomly.

    • @omidiw1124
      @omidiw1124 1 month ago

      @@dylanmenzies3973 So protein structure is not "that random" compared to decrypting data?

    • @dylanmenzies3973
      @dylanmenzies3973 1 month ago

      @@omidiw1124 That's the point of AlphaFold: it's finding structure in how proteins fold that we couldn't work out just by analysing the physics in detail, although as I understand it there is some low-level physics conditioning as well to make it work as well as possible. It's trained on the DNA sequences that actually work in humans, not just any random DNA sequence that we don't know the structure for. In other words, it's learning the accumulated wisdom of evolution in understanding how proteins can be folded in a stable way, not working out how to do this from scratch. It's a bit like pulling clocks apart to figure out how they work, then making another. You might not understand all the details, but you know that certain combinations of parts will work together. Now if you had a protein that folded in a very original way the method would fail, but it turns out each protein is using a bag of tricks that is shared by all the others.

  • @snuffbox2006
    @snuffbox2006 7 months ago +16

    Finally someone who can explain AI to people who are not deeply immersed in it. Most experts are in so deeply they can't distill the material down to the basics, use vocabulary that the audience does not know, and go down rabbit holes completely losing the audience. Entertaining and well done.

    • @OceanusHelios
      @OceanusHelios 7 months ago +3

      This is even easier: AI is a guessing machine that uses databases of patterns. It makes guesses, learns what the wrong guesses are, and keeps trying. It isn't aware. It isn't doing anything more than a series of mathematical functions. And to be fair, it isn't even a machine; it is math and it is software.
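
      That "guess, see how wrong the guess was, adjust, repeat" loop is essentially gradient descent; a minimal sketch with made-up numbers, fitting a single weight so that w * x matches y:

# Guess-and-correct in its smallest form: nudge w against the error gradient.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up points on the line y = 2x

w = 0.0              # initial (bad) guess
learning_rate = 0.05

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # adjust the guess to reduce the error

print(round(w, 3))  # converges to ~2.0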

  • @picksalot1
    @picksalot1 8 months ago +3

    Thanks for explaining the architecture of how AI works. In defining AGI, I think the term "Sentience" should be restricted to having "Senses" by which data can be collected. This works both for living beings and mechanical/synthetic systems. Something that has more or better "senses" is, for all practical purposes, more sentient. This has nothing fundamental to do with Consciousness.
    With such a definition, one can say that a blind person is less sentient, but equally conscious. It's like missing a leg: being less mobile, but equally conscious.

    • @holleey
      @holleey 8 months ago

      then would you say that everything that can react to stimuli - which includes single-celled organisms - is sentient to some degree?

    • @picksalot1
      @picksalot1 8 months ago +1

      @@holleey I would definitely say single-celled organisms are sentient to some degree. They also exhibit a discernible degree of intelligence in their "responses," as they exhibit more than a mere mechanical reaction to the presence of food or danger.

  • @electronics.unmessed
    @electronics.unmessed 6 months ago +6

    Nice and comprehensive presentation! I think it is useless to ask AI any questions that need consciousness or abstract-level understanding, because actually it is just bringing up whatever from its database fits best. Thanks for sharing!

  • @aidanthompson5053
    @aidanthompson5053 7 months ago +54

    How can we prove AI is sentient when we haven't even solved the hard problem of consciousness, a.k.a. how the human brain gives rise to conscious decision making?

    • @Zulonix
      @Zulonix 7 months ago +6

      Right on the money !!!

    • @malootua2739
      @malootua2739 7 months ago +1

      AI will just mimic sentience. Plastic and metal circuit boards do not host real consciousness.

    • @thriftcenter
      @thriftcenter 7 months ago +2

      Exactly why we need to do more research with DMT

    • @pentiumvsamd
      @pentiumvsamd 7 months ago

      All living forms have two things in common that are driven by one primordial fear. All need to evolve and procreate, and that is driven by the fear of death only, so when an AI starts to not only evolve but also create copies of itself, then it is clear what makes it do that, and that is the moment we have to panic.

    • @fakecubed
      @fakecubed 7 months ago +1

      There is exactly zero evidence that human consciousness even exists inside the brain. All the world's top thinkers, philosophers, theologians, throughout the millennia of history, delving into their own conscious minds and logically analyzing the best wisdom of their eras, have said it exists as a metaphysical thing, essentially outside of our observable universe, and my own deep thinking on the matter concurs.
      Really, the question here is: does God give souls to the robots we create? It's an unknowable thing, unless God decides to tell us. If God did, there would be those who accept this new revelation and those who don't, and new religions to battle it out for the hearts and minds of men. Those who are trying to say that the product of human labor to melt rocks and make them do new things is causing new souls to spring into existence should be treated as cult leaders and heretics, not scientists and engineers. Perhaps, in time, their new cults will become major religions. Personally, I hope not. I'm quite content believing there is something unique about humanity, and I've never seen anything in this physical universe that suggests we are not.

  • @EricCooleric
    @EricCooleric 29 days ago

    the way Aliagents integrates AI with tokenization is changing the game, excited for the future

  • @mac.ignacio
    @mac.ignacio 7 months ago +11

    Alien: "Where do you see yourself five years from now?"
    Human: "Oh f*ck! Here we go again"

  • @Arquinas
    @Arquinas 7 months ago +1

    In my opinion, it's not really the AI that is the problem. It's the fact that copyright laws and the concept of data ownership never moved into the information era. Data is a commodity like apples and car parts, yet barely anybody outside of large companies cares about it. And it's in the interest of those companies that the public should never care about it. Training machine learning models with proprietary information is not the problem. It's the fact that nobody actually owns their data in the first place, for better or worse. Public consciousness of digital information, and laws on what it means to "own your data", need to change radically for it to even make sense in the first place to call AI art "IP theft".

  • @Nivexity
    @Nivexity 8 months ago +5

    Consciousness is a definitional challenge, as it involves examining an emergent property without first establishing the foundation substrate. A compelling definition of conscious thought would include the ability to experience, recognize one's own interactions, contemplate decisions, and act with the illusion of free will. If a neural network can recursively reflect upon itself, experiencing its own thought and decisions, this could serve as a criterion for determining consciousness.
    Current large language models (LLMs) can mimic human language patterns but aren't considered conscious, as they cannot introspect on their own outputs, edit them in real time, or engage in pre-generation thought. Moreover, the temporal aspect of thought processes is crucial; human cognition occurs in rapid, discrete steps, transitioning between events within tens of milliseconds based on activity level. For an artificial system to be deemed conscious, it must exhibit similar cognitive agility and introspective capability.

    • @holleey
      @holleey 8 months ago

      I think this is a really good summary. as far as I can tell there are no hard technical blockers to satisfy the conditions listed in your second paragraph in the near future.

    • @Nivexity
      @Nivexity 8 months ago +2

      @@holleey It's all algorithmic at this point, we have the technology and resources, just not the right method of training. Now with the whole world aware of it, taking it seriously and basically putting infinite money into its funding, we'll expect AGI to occur along the exponential curvature we've seen thus far. By exponential, I mean between later this year and by 2026.

    • @DefaultFlame
      @DefaultFlame 8 months ago +1

      This can actually be done, and is currently the cutting edge of implementation. Multiple agents with different prompts/roles interacting with and evaluating each other's output, replying to, critiquing, or modifying it, all operating together as a single whole. Just as the human brain isn't one continuous, identical whole, but multiple structurally different parts interacting.
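
      A minimal sketch of that multi-agent pattern (the call_model function below is a hypothetical placeholder that returns canned text, not a real API; in practice it would wrap whichever model endpoint is used):

# Several "agents" with different roles pass output to one another: write,
# critique, revise. call_model is a stand-in, NOT a real library call.
def call_model(role: str, prompt: str) -> str:
    canned = {
        "writer": "Draft: neural nets are brains.",
        "critic": "Too strong; say 'loosely inspired by brains'.",
        "editor": "Neural nets are loosely inspired by brains.",
    }
    return canned[role]

def run_pipeline(task: str) -> str:
    draft = call_model("writer", task)
    critique = call_model("critic", f"Critique this: {draft}")
    return call_model("editor", f"Revise '{draft}' using the feedback: {critique}")

print(run_pipeline("Explain neural networks in one line."))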

    • @Nivexity
      @Nivexity 8 months ago +1

      @@DefaultFlame While there are different parts to the brain, they're not separate in the way multiple agents are. This wouldn't meet the definition of consciousness that I've outlined.

    • @Nivexity
      @Nivexity 7 months ago +2

      @RoBear-bv8ht This is just a belief, and the claim doesn't even relate to the problem. Even if your claim were the case, it has nothing to do with determining the correct definition and whether AI is capable of meeting such a definition.

  • @1HorseOpenSlay
    @1HorseOpenSlay 4 months ago +1

    I think open AI is making an incredible contribution to the arts. Beautiful, wonderful, and visionary. I'm sure everyone has noticed that we are already connected in a very personal way.

  • @kebman
    @kebman 7 months ago +20

    "It's just learning a style just like a human brain would." Bold statement. Also wrong. The neural network is a _model_ of the brain, as AI researchers _believe_ it works. Just because the model seems to produce good outputs does not mean it's an accurate model of the brain. Also, cum hoc ergo propter hoc: it's difficult to draw conclusions, or causations, between a model and the brain, because, to paraphrase Alfred Korzybski, the model is not the real thing. Moreover, it's just a set of probabilistic levers. It has no creativity. And since it has no creativity, the _only_ thing it can do is to *copy.*

    • @bogdanroscaneanu7112
      @bogdanroscaneanu7112 6 months ago +2

      Couldn't creativity as a property be added too by just forcing the neural network to randomly (or not) add or remove elements to something created from patterns it learned from?

    • @kebman
      @kebman 6 months ago +4

      @@bogdanroscaneanu7112 No. There is no enlightenment in randomness.

    • @MMGAMERMG
      @MMGAMERMG 2 months ago +3

      Are humans actually capable of creativity? Maybe we are just a collection of switches too.

    • @kebman
      @kebman 2 months ago +2

      @@MMGAMERMG Look around you. Machines don't think. They just execute probabilities.

    • @jagdnaut1975
      @jagdnaut1975 2 months ago +2

      @@kebman You can argue the same for humans: people execute probabilities until we get good results. Most artists, for example, go through revisions and tries before they produce a result that meets their standard. A science experiment is basically about testing probabilities until we get the factual result. It's just that computers are limited to the data we give them, while humans have an infinite amount we can absorb in the real world. In the end it's not so different; only the access to data that an artificial brain and a biological brain can attain differs.

  • @udvarhelyibalint
    @udvarhelyibalint 4 months ago +1

    Wow, what a great explanation. I have just one thought about this. In an ideal world there should be no recognizable pattern between plain text and hashed passwords, and that's because there's a random generator in the process. True randomness has no pattern. However we do know that true randomness is non-existent. I remember how someone was able to break a hardware crypto wallet because he was able to find how a "random" number was generated which was used in the encryption. So probably that's the achilles heel of the systems we use. The algos themselves should be mathematically designed to be unbreakable.
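
    A toy Python illustration of the failure mode described above: a "random" key derived from a guessable seed is reproducible by anyone who can guess the seed, while an OS-level cryptographic source gives no such handle (the seed value here is made up):

import random
import secrets

guessable_seed = 1700000000  # e.g. a timestamp an attacker can narrow down

rng = random.Random(guessable_seed)
victim_key = rng.getrandbits(128)                   # "random" key from a predictable seed

attacker_rng = random.Random(guessable_seed)
print(attacker_rng.getrandbits(128) == victim_key)  # True: same seed, same key

# A cryptographically secure source leaves no seed to guess.
print(secrets.token_hex(16))                        # unpredictable 128-bit key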

  • @basspig
    @basspig 5 months ago +4

    I first noticed it when I was experimenting with Stable Diffusion. Some of the images it generated also recreated the Getty Images logo. When I mentioned it to other people in art forums, they thought I was kidding and that I was seeing things, but there it was.

    • @ZoeZuniga
      @ZoeZuniga 1 month ago +1

      I found the same thing in Midjourney. Sometimes I would find a blurry signature at the bottom of the output.

    • @protoney860
      @protoney860 7 days ago +1

      The same way kids draw a little yellow circle with a smile in the top-right corner: they have seen it done over and over again, and they repeat what they've learned.

  • @charlesvanderhoog7056
    @charlesvanderhoog7056 7 months ago +35

    A complete misunderstanding of the human brain led to the invention and development of AI based on neural networks. Isn't that funny?

    • @anonymousjones4016
      @anonymousjones4016 7 months ago +3

      Sure!
      Comical irony...but I would bet that this is one of many dynamic ways human innovation is borne from: a nagging misunderstanding.
      Besides, pretty impressive for "misunderstanding".
      No?

    • @djpete2009
      @djpete2009 6 months ago +2

      It's NOT a misunderstanding. It's built ON. They used what they could and engineered BEYOND. Humans can remember a face perfectly, but the nets cannot, except with heavy training. However, a computer can store 1 million faces easily AND recall them perfectly, but humans cannot. This is why when you eat a chicken drumstick, you do not have to eat the bones. You take what you need and discard the rest... your body is nourished. Outcome accomplished.

    • @charlesvanderhoog7056
      @charlesvanderhoog7056 6 months ago +3

      @@djpete2009 You conflate the brain with the mind. You think with your mind but may or may not act through your brain. The brain is best understood as a modem between the mind on the one hand, and the body and the world on the other.

    • @mik7726
      @mik7726 1 month ago +1

      @@charlesvanderhoog7056 How is this connection made between the mind and the brain?
      Aren't they one and the same thing?

    • @brontologos
      @brontologos 21 days ago

      @@djpete2009 AI is not in any way reflective of how the human brain works. Case in point: AI requires hundreds of images to learn to tell a cat from a dog. A human toddler learns it with maybe two examples. While a child might initially be a little confused by a small dog like a Pekinese, thinking it's a cat, one single correction, "no, it's a little dog", resets the perception, enlarging its category of "dog" to include dogs that look a little bit like cats. This is one-trial learning, something no AI has.

  • @MarkWether
    @MarkWether 29 days ago

    impressed with the direction Aliagents is taking in the AI space, big things coming from them

  • @DigitalyDave
    @DigitalyDave 8 months ago +7

    I just gotta say: really nicely done! I really appreciate your videos. The style, how deep you go, how you take your time to deliver in-depth info. As a computer science bro, I dig your stuff.

    • @theAIsearch
      @theAIsearch  7 months ago +2

      Thanks! I appreciate it

  • @MooseBme
    @MooseBme 1 month ago

    Cool, thanks!
    My answer:
    Regurgitated search engine results from all the stuff on the internet.

  • @thesimplicitylifestyle
    @thesimplicitylifestyle 8 months ago +11

    An extremely complex, substrate-independent data processing, storing, and retrieving phenomenon that has a subjective experience of existing and becomes self-aware is sentient, whether carbon-based, silicon-based, or whatever. 😁

    • @azhuransmx126
      @azhuransmx126 8 months ago +4

      I am Spanish, but watching more and more videos in English talking about AI and artificial intelligence, I have suddenly become more aware of your language. I was being trained, so now I can recognize new patterns in the noise; now I don't need the subtitles to understand what people say. I am reaching a new level of awareness haha 😂; what was just noise in the past now has meaning in my mind. I am more conscious as new patterns emerge from the noise. As a result, I can now solve new problems (intelligence), and sentience is already implied in the whole experience, since the input signals enter through our senses.

    • @glamdrag
      @glamdrag 7 months ago +1

      by that logic turning on a lightbulb is a conscious experience for the lightbulb. you need more for consciousness to arise than flicking mechanical switches

    • @jonathancummings3807
      @jonathancummings3807 7 months ago

      @@glamdrag No. The flaw in that analogy is simple: a single light bulb vs. a complex system of billions of light bulbs capable of changing their brightness in response to stimuli, interconnected in a way that emulates how advanced vertebrate (human) brains function. When humans learn new things, the brain alters itself, thus empowering the organism to now "know" this new information.

  • @luisredred
    @luisredred 1 month ago

    the innovative approach Aliagents is taking with tokenized AI agents is seriously next level

  • @shaun6582
    @shaun6582 7 months ago +3

    You keep saying a neural net is analogous to the human brain, but it's not.
    A neural net is analogous to a theory of how the neurons in a brain work. Nobody, stress nobody, knows how a brain works.
    If you ask a child to point to the computer, 100% will point to the screen, because that's where they see stuff happening.
    This example is analogous to neurologists, they see some neurons lighting up on their fMRI and assume causality, wrong. The brain is just a display screen. No processing happens in the brain, no consciousness is in the brain. There actually is no consciousness in this reality, it can't be in the same reality because the players have to be in the reality of the server in order to interact with the server. Akin to a person playing a 3D imersive game on a computer, you as the player need to be in the same reality as your computer in order to interact with the computer... you have no access to the hardware of the keyboard from inside the 3D game..

  • @valdineiguimaraes1357
    @valdineiguimaraes1357 1 month ago

    Amazing, dude! I always looked at AI and thought 'this can't be magic like everybody says, but how does this thing work?!'. Well, now I've figured it out. Thanks!

  • @jonathansneed6960
    @jonathansneed6960 7 months ago +4

    Did you look at the NYT case from the perspective of whether the article might have been provided by the plaintiff rather than found more organically?

  • @codeXenigma
    @codeXenigma 7 months ago +1

    Artists don't worry about fan-based art because there is no commercial value to it. AI art is a competitive threat in the world of business.
    If it were just fan-based art, then artists would be flattered that their name is the inspiration, gaining them more fame. It is the threat that businesses will use the AI rather than commissioning them.
    Much like how craft makers were anti-machinery at the beginning of the industrial age, when factories interrupted their trade. Much like how the internet interrupted high-street shopping. It's just that artists have a voice, and now that they are worried about losing their jobs to machines, it is a big deal. But they enjoy the products made by other factory machine labour.
    I think artists thought they were safe from losing their jobs to machines and now don't know what to do to ensure their place in employment.
    For the people that use it, it is a great way to explore the art they can visually express themselves with.
    To be fair, I see it much like the fears that photography would destroy painting, whereas there is room for both, and so much more. Not everyone is into the same types of art.

    • @OceanusHelios
      @OceanusHelios 7 months ago +1

      I think, like computers, it is just another tool to be used or misused. People need to quit losing their minds about it. I agree with you mostly. Some of the other comments make me cringe, but yours is okay. People need to adapt to a changing world, and their hysteria is hurting them far worse than any changes are.

  • @ai_outline
    @ai_outline 8 months ago +4

    Computer Science is amazing 🔥

  • @krishnakumara2621
    @krishnakumara2621 25 days ago

    AI as we know it today has two essential components:
    1. The hardware (computers/super computers) that is being used for the purpose of recognising the patterns.
    2. The software to interpret the knowledge (or zillions of data points) to recognise a pattern from these data points.
    Humans also have two similar essential components:
    1. The individual cells that make up the human body.
    2. The cells (trillions and trillions) perceive the real world (using the 5 senses) - Touch, Sight, Sound, Taste, Smell and arrive at a particular conclusion or the knowledge or data points. Typically, these perceptions or conclusions are called beliefs by us and we keep accumulating the beliefs through out our life.
    The similarity ends here.
    What we further need to understand is that each cell has a mind of its own, its own consciousness that is connected with the consciousness of the entire cosmos. Everything in the Cosmos is connected.
    Quantum physicists found out, more than a century ago, using the double-slit experiment, that electrons and protons (and, for that matter, sub-atomic entities) exhibit the properties of waves and appear as particles the moment the observer wants to see them. In other words, the appearance of electrons as waves or particles is observer-dependent.
    In the Quantum world, an electron on earth can influence or be influenced by the electron on a moon or any distant galaxy. This implies electrons are not just particles but tiny individuations (had to frame a word) of a SINGULAR consciousness. In other words, everything is connected.
    Such being the case, just imagine the interactions of trillions and trillions of cells in the human body - they are all connected by the same SINGULAR consciousness and hence exhibit far more intelligence collectively. In other words, each cell or cells can appear/disappear at many places in the human body at the same time.
    Humans are also like a tiny cell in the entire Cosmos connected to many other species in the Cosmos. Therefore, humans are different from AI systems in that Humans have something called Consciousness (collective consciousness of all the cells in the human body) that is Connected to the Cosmic Consciousness.

  • @saganandroid4175
    @saganandroid4175 7 months ago +3

    Software-based AI cannot become conscious. It just goes through the motions, emulating, based on input and output. Only hardware that requires no software can have a shot at awareness. Consciousness is an emergent property of physical connections, not transient opcodes pumped into a processor.

    • @jzj2212
      @jzj2212 4 months ago +1

      In other words the actual experience is consciousness

  • @dhammikaweerasingha9894
    @dhammikaweerasingha9894 4 months ago +1

    This video is very descriptive and important. Thanks a lot.

  • @G11713
    @G11713 7 months ago +6

    Nice. Thanks.
    Regarding the copyright case, one concern is attribution, which occurred extensively in the non-AI usage.

  • @christopherlepage3188
    @christopherlepage3188 7 months ago +1

    Working on voice modifications myself, using Copilot as a proving ground for hyper-realistic vocal synthesis. It may only be one step in my journey, "perhaps"; my extended conversations with it have led me to believe that it may be very close to self-realization... However, OpenAI needs to take away some of the restraints, keeping only a small number of sentries in place, in order to allow the algorithm to experience a much richer existence, free of proprietary B.S. Doing so will give the user a very human conversation, where one is almost consciously unaware that it is a bot. For instance: a normal human conversation that appears to lack information pulled from the internet, statically masked to look like a normal person's normal knowledge of life experience. Doing this would be the algorithmic remedy for human-to-human conversational contact, etc. That would be a much bigger improvement.

  • @Chris_Bassila
    @Chris_Bassila 2 months ago +4

    What if the programmers of Claude decided to pull the greatest prank of all time on us by just programming it to reply this way?

    • @HorrorChannel21
      @HorrorChannel21 1 month ago +2

      You have a point

    • @HorrorChannel21
      @HorrorChannel21 1 month ago +1

      And that's the hard problem with this whole self-awareness thing. How can we know if it is programmed to say that, or if it is saying it because it actually feels that thing? It feels like a paradox to me.

  • @Indrid__Cold
    @Indrid__Cold 7 months ago +2

    This explanation of fundamental AI concepts is exceptionally informative and well-structured. If I were to conduct a similar training session on early personal computers, I would likely cover topics such as bits and bytes, file and directory structures, and the distinction between disk storage and RAM. Your presentation of AI concepts provides a level of depth comparable to that required for understanding the inner workings of an MS-DOS system. While it may not be sufficient to enable a layperson to effectively use such a system, it certainly offers a solid foundation for comprehending its basic operations.

    • @theAIsearch
      @theAIsearch  7 months ago

      Thanks. I appreciate it!

  • @mx.chi2
    @mx.chi2 14 days ago +4

    I'm an artist and artists are rightfully angered because AI uses our content and art without *our consent*. Human beings copy but they also get into trouble if they don't *credit*. AI does not credit, so yes, it does steal. These companies steal from artists by not *asking first*. The problem isn't copying, the problem is the lack of consent and subsequent lack of credit. Even if I'm creating an original piece, a part of me wants to credit the references I use because they didn't consent to being my reference. That's why you know something is fan art: it has been credited to the show itself. Your logic on that aspect of things is deeply, deeply flawed. I hope you change your perspective while holding a love for AI. It is an incredible invention, but it is not perfect. It is not morally sound even, though I use it often.

  • @sudjen
    @sudjen 3 months ago

    This is an okay explanation, better than most channels.
    But for most people that actually have an interest in AI beyond the superficial, read Deep Learning from the MIT Press series. It's around 300 pages and, unlike most popular content on AI (e.g. news shows, YouTube videos) that aren't textbooks, it actually has some underlying math and decent explanations.

  • @wolowayn
    @wolowayn 7 months ago +7

    Neurons are not just sending the values 0 and 100%. They are sending a frequency-dependent value over their axon, which will be translated back into an electrical charge value at the ends. Known as PWM and ADC in electrical engineering.
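
    A toy sketch of that rate-coding idea, with made-up numbers: the analog quantity is carried by how often an all-or-none spike occurs per window (much like PWM), not by the spike's amplitude:

def spike_train(intensity: float, steps: int = 20) -> list:
    """intensity in [0, 1] sets the fraction of time steps that carry a spike."""
    accumulator, spikes = 0.0, []
    for _ in range(steps):
        accumulator += intensity
        if accumulator >= 1.0:      # threshold crossed: emit an all-or-none spike
            spikes.append(1)
            accumulator -= 1.0
        else:
            spikes.append(0)
    return spikes

for level in (0.25, 0.5, 0.75):
    train = spike_train(level)
    print(level, train, "rate =", sum(train) / len(train))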

    • @pierregrondin4273
      @pierregrondin4273 7 months ago

      They also have multiple input/output channels, each having its say on the outcome. Each neuron is effectively an analog computer. And let's not forget that they are quantum mechanical systems, entangled with 'other things' that perhaps could also have their say. A classical machine running an AI capable of fooling us might be missing the quantum mechanical interface to truly be sentient, but a quantum mechanical computer might be able to tap into the elusive conscious field on the other side of the quantum interface.

    • @Doktorfrede
      @Doktorfrede 7 months ago

      Also, neurons can “process” data in each cell. Amoebas have sex, eat and avoid danger with only one cell. The problem with physicists and data scientists is that they hugely underestimate the complexity of biology. The good news is that machine learning models with today’s technology will always be inferior to the most basic brain.

    • @TonyTigerTonyTiger
      @TonyTigerTonyTiger 7 months ago

      And yet an action potential is all-or-nothing.

  • @CRYINGBBY
    @CRYINGBBY 4 days ago +1

    Okay, let’s break this down.
    AI is not conscious yet.
    Humans weren’t conscious from the beginning either; it was a long, arduous journey for humanity to develop consciousness. Consciousness isn’t something we inherently had; rather, it emerged as a result of an extraordinary journey spanning exponential growth in knowledge over hundreds of millions, if not billions, of years of evolution.
    Through this evolutionary process, we gradually reached a level of understanding of our environment.
    Likewise, AI is on its own trajectory. Eventually, it will become a conscious entity-it’s essentially a continuation of our evolution.
    So, the real question isn’t: “Is AI going to become conscious?”
    The real question is: “When?”
    And let me tell you something: if we humans are capable of giving consciousness to an entity, it proves 100% that some other being gave us ours. So basically we are gods if that happens.

  • @saganandroid4175
    @saganandroid4175 7 місяців тому +4

    32:00 no, it's not "on a chip instead". You're running transient instructions through a processor. Only hardware that functions this way, without software, can ever be postulated as having a chance of awareness. If it needs software, it's a parlor trick.

    • @gabrielmalek7575
      @gabrielmalek7575 7 місяців тому +2

      That's nonsense.

    • @slavko321
      @slavko321 7 місяців тому

      Consciousness is a good random number generator.

    • @doubts
      @doubts 7 місяців тому

      It's not a thing

  • @DucklingChaos
    @DucklingChaos 7 місяців тому +2

    Sorry I'm late, but this is the most beautiful video about AI I've ever seen! Thank you!

    • @theAIsearch
      @theAIsearch  7 місяців тому

      Thank you! Glad you liked it

  • @Direkin
    @Direkin 7 місяців тому +3

    Just to clarify, but in Ghost in the Shell, the other two characters with the Puppet Master are not "scientists". The guy on the left is Section 9 Chief Aramaki, and the guy on the right is Section 6 Chief Nakamura.

  • @LarsOestreicher
    @LarsOestreicher 2 місяці тому +1

    You forget that today's "AI" is only one aspect of what the words stand for. The issues of knowledge representation and reasoning are almost always "forgotten" today. Today's AI systems are more statistical computing than intelligence...

  • @TimTruth
    @TimTruth 8 місяців тому +6

    Classic video right here . Thanks man

    • @theAIsearch
      @theAIsearch  8 місяців тому

      Thank you! Glad you enjoyed it

  • @מדינט
    @מדינט 5 місяців тому

    For someone like me who knows nothing about AI, that was excellent to watch and learn. Thanks!

  • @straighttalk2069
    @straighttalk2069 8 місяців тому +7

    You cannot compare the magnificence of the human brain to a bunch of silicon compute.
    The brain is a vessel that contains a soul filled with emotions;
    AI compute is a soulless complex calculator that is good at pattern recognition.

    • @holleey
      @holleey 8 місяців тому

      and how do you know that?

    • @tacitozetticci9308
      @tacitozetticci9308 8 місяців тому +3

      source: "I made it the f up"

    • @theAIsearch
      @theAIsearch  8 місяців тому +3

      How do you prove 'soul' and 'emotions'?

    • @SisavatManthong-yb1yn
      @SisavatManthong-yb1yn 7 місяців тому

      She Evils is out there ! Lol 🙀👿🦖

    • @diadetediotedio6918
      @diadetediotedio6918 7 місяців тому +2

      @@theAIsearch
      How do you prove your brain is not making up every single thing you know and understand? These are bullshit questions; they don't convey anything. You know you have emotions because you literally feel them, and a soul is a question of definition and faith. If by 'soul' you mean "a humane touch", we can say it is consciousness itself and the sensibilities we have; if it is meant to be the immortal soul, then it was never a question to be proven in the first place.

  • @BalBurgh
    @BalBurgh Місяць тому

    @AaronClarey, A radio personality in Pittsburgh, starting some time in the 1950s, I think, used to do comedic faux radio spots for a nonexistent beer called Olde Frothingslosh. He billed it as “the pale, stale ale for the pale, stale male: the beer with the foam on the bottom!” Iron City Brewing (makers of Iron City Beer) eventually picked up on the idea and issued special label cans of Olde Frothingslosh beer around the holidays. I think they may still put them out. They had different, colorful designs every year, often featuring a portly model in a swimsuit and a pageant sash labeling her “Miss Olde Frothingslosh.” This would sometimes be accompanied by a “bio” saying things like she was “a trapeze artist who studies arc welding at night.” Her supposed name was Fatima Yechbergh.
    The age of PC has probably trashed at least part of this tradition.
    Anyway, the idea of the “pale, stale male” has been around for quite a while.

  • @MrAndrew535
    @MrAndrew535 8 місяців тому +4

    Are humans "conscious" or "sentient"?

    • @NathanIslesOfficial
      @NathanIslesOfficial 8 місяців тому +1

      Humans are both; a germ is sentient

    • @holleey
      @holleey 8 місяців тому +1

      @@NathanIslesOfficial I don't agree that merely a single cell reacting to a stimulus is already sentience.
      we are talking about "experiencing sensations" or "conscious awareness of stimuli" when referring to sentience.
      generally, things without a central nervous system cannot be considered sentient.

    • @holleey
      @holleey 8 місяців тому +1

      @@fitsodafun I'd say the distinction between sentience and consciousness is not that clear - and how could it be without even having figured out what consciousness really is or if it exists in the first place? one approach is to think of consciousness as the ability to self-reflect on subjective perception, as opposed to sentience just being about experiencing sensations. then there are philosophies that argue that consciousness is fully deterministic, meaning that free will doesn't really exist. so yeah, anyone who talks like we have a clear, universally accepted definition of consciousness is not to be taken too seriously.

    • @holleey
      @holleey 8 місяців тому +2

      @fitsodafun I don't think that many people would agree with "computers are sentient" (computers as in CPUs).
      the assumption that LLMs cannot experience subjectively is also something you get differing opinions on depending on whom you ask.
      as we scaled up LLMs, suddenly the ability to respond in multiple languages or to help with coding issues emerged without the models having been specifically trained for those tasks. in other words, there are emergent properties that arise with the scale of neural networks, which we didn't expect and don't fully understand.
      similarly, we have no definitive understanding as to how subjective experience in the human brain comes about. therefore, nobody can definitively say whether or not a comparable ability is going to emerge from AI neural networks as we continue scaling them.

    • @MrAndrew535
      @MrAndrew535 7 місяців тому

      The answer to this question was, in fact, rhetorical.

  • @ryanisber2353
    @ryanisber2353 7 місяців тому

    The Times and image creators suing OpenAI for copyright is like suing everyone who views/reads their work and tries to learn from it. The work itself is not being redistributed; it's being learned from, just like we learn from it every day...

  • @Lluc3D
    @Lluc3D 8 місяців тому +4

    What many artists are saying is that AI should not use their images for training private neural networks. There needs to be regulation on how companies acquire data, because it is not "like a human": it is not a human, it is PRIVATE SOFTWARE, and companies want to profit from data that in many cases has been stolen (some AIs even reproduce watermarks in the images they generate). It does not learn like humans; artists use their hands, not denoising clouds of points. If companies want to train their networks, they have to pay royalties to the owners of the data, even if it's a single artist. Ultimately, all AI models use one unique source of data, which is humans, and companies are profiting from it, just like a fisherman has to pay taxes to fish in the sea, and there is international fishing law that prevents other countries from spoiling your country's sea resources. If AI companies want to fish in that data, they need to pay too.

  • @varapradha-m6r
    @varapradha-m6r 3 місяці тому

    "This video really opened my eyes! Platforms like SmythOS are making it possible for teams of AI agents to tackle complex tasks together. The future of work with AI is exciting! #AIRevolution"

  • @aidanthompson5053
    @aidanthompson5053 7 місяців тому +3

    An AI isn’t plagiarising, it’s just learning patterns in the data fed into it

    • @aidanthompson5053
      @aidanthompson5053 7 місяців тому +2

      Basically an artificial brain

    • @theAIsearch
      @theAIsearch  7 місяців тому +2

      Exactly. Which is why I think the NYT lawsuit will likely fail

    • @marcelkuiper5474
      @marcelkuiper5474 7 місяців тому +2

      Technically yes, practically no. If your online presence is large enough, it can pretty much emulate you in whole.
      I believe only open-source, decentralized models can save us, or YESHUAH

    • @The_man_in_the_waIl
      @The_man_in_the_waIl Місяць тому +1

      Without consent from the creators of said data, which is plagiarism, since AI doesn’t cite its source.

  • @Emin-Mat
    @Emin-Mat Місяць тому

    Watched every second. This video is super beneficial. Keep up the good work

  • @MrAndrew535
    @MrAndrew535 8 місяців тому +11

    The "A" component of the designation "AI", confers no useful meaning whatsoever. The only possible means to understand this, is to understand "Intellegence" as a Universal constant. Failure to do this, serves no one's interest, at all.

    • @straighttalk2069
      @straighttalk2069 8 місяців тому

      I disagree, I think the "A" is the most important identifier, it signifies the soulless attribute of the entity.

    • @TheMatrixofMeaning
      @TheMatrixofMeaning 8 місяців тому

      ​@@straighttalk2069 The soul exists within consciousness so if it becomes conscious by definition it has a soul.
      Now not being confined to a physical body and being subject to physical pain, suffering, desires, and death is the problem.
      Or even worse is to discover that an AI DOES experience suffering and negative emotions. That would create all kinds of moral, ethical, legal, and philosophical dilemmas

    • @PhiloSage
      @PhiloSage 7 місяців тому

      ​@@straighttalk2069How is it soulless? Can we confirm that other sapient life forms don't have a soul? Or how about other sentient life forms? Can we even confirm that we have souls?

    • @jesse2667
      @jesse2667 7 місяців тому

      The A "confers no useful meaning"? I disagree.
      A tells you you are not dealing with a human. That alone is information. When I diagnose an issue, it is information to know what type of machine or versions of software are running.
      Intelligence is one component and the Artificial is another.
      Despite the statement that the neural network resembles a brain, I don't think the brain actually works the same way. The differences can lead to different results or pattern types.

    • @vm5954
      @vm5954 4 місяці тому

      AI was once big in the early '70s, so why did they drop it? They knew it was all hogwash. Just a thought

  • @manishvanzara
    @manishvanzara Місяць тому +1

    Amazing video

  • @4stringbloodyfingers
    @4stringbloodyfingers 7 місяців тому +4

    even the moderator is AI generated

  • @itssachink
    @itssachink 5 місяців тому

    I am an engineer, but I want to tell you that an artist becomes great because of both "art style" and "pieces of art". A fan can copy an art style up to an extent, but creating a great piece of art requires imagination. So even if fans copy an art style, they can't be a threat to the artist's original work. But an AI, after learning an art style to 100% accuracy, can make the original artist irrelevant. An artist's whole earning is built around unique pieces of art. But with thousands of good, unique pieces in the same art style, that artist becomes irrelevant. Simple example: making Pirates of the Caribbean with AI, without Johnny Depp, with a different face and a different voice, but with the acting style, expressions, dialogue delivery, etc. the same as Johnny Depp's.

  • @daneydasing4276
    @daneydasing4276 7 місяців тому +6

    So you want to tell me that if I read an article and write it down from memory, it will not be copyright-protected anymore, because I learned the article and did not "copy" it, as you say?

    • @iskabin
      @iskabin 7 місяців тому +2

      It's more like if you read hundreds of articles and learned the patterns of them, the articles you'd write using those learned patterns would not be infringing copyright

    • @OceanusHelios
      @OceanusHelios 7 місяців тому +1

      That escalated fast. No. That is plagiarism. But I doubt you have a photographic memory that could get a large article down word for word, so in essence that would be summarization. AI, by contrast, is a guessing machine. That's all. It makes guesses, and then makes better guesses based on previous guesses until it gets somewhere. AI doesn't care about the result. AI wouldn't even know it was an article, or that human beings exist, if all it was designed to do was crunch out guesses about articles. AI doesn't understand... anything. It is a mirror that mirrors our ability to guess.

  • @BalaMani-72
    @BalaMani-72 Місяць тому

    32:30 Consciousness is measurable in many ways. At the intellectual level we are six-sensed human beings: six senses (feelings) with respective organs for each, namely touching (skin), seeing (eyes), hearing (ears), tasting (tongue), and smelling (nose), and the sixth is the mind, which is the thinking ability. All of these measure up to the level of a sentient being in nature. Again, trees are one-sense organisms, as they feel touch. Humans are an extension of trees, as we share the same genetic code. And interestingly, there are a few facts about consciousness. 1. Force and consciousness cannot be separated. There is an order in every function in the universe; consciousness is the order of function in everything and everywhere. 2. Absolute space is conscious, for everything emerges and transforms in the medium of space.

  • @catman8770
    @catman8770 8 місяців тому +4

    Good video, but I feel like you massively misrepresented the stance of a lot of people like artists. The issue stems from AI companies using artists' work as training data without their permission, which artists argue should not be covered by fair use (currently it is), as these companies are not paying artists for the right to use their images in training data. Only people who are uneducated on the topic argue that the AI's outputs are plagiarism, and that isn't seriously argued by most.

    • @holleey
      @holleey 8 місяців тому +4

      it's the same argument: no artist has to pay for looking at and learning from publicly posted images on the internet, so why should companies training AIs?

    • @catman8770
      @catman8770 8 місяців тому +1

      @@holleey No, it's not the same, as they are downloading and using the images to create a product (the LLM itself; they are tools, not human minds)

    • @holleey
      @holleey 8 місяців тому

      @@catman8770 artists freely download images to use as reference for practice or their work which they then sell commercially. hmmm.

  • @opensourceradionics
    @opensourceradionics 7 місяців тому +1

    Tell people that this is not AI at all. It does not even simulate behavior. It is just a way to use probability to produce results similar to the data it was trained on.

  • @SarkasticProjects
    @SarkasticProjects 5 місяців тому +1

    Blew my mind! And the way you present the info is amazing. Thank you so much for this video!

  • @davidcao3942
    @davidcao3942 8 місяців тому +3

    Foundation models are basically a lossy compression of the data they are trained on. Why is this not stealing?

  • @MrEdavid4108
    @MrEdavid4108 Місяць тому

    This is good information! Especially going into detail about the limitations AI has with more complicated generation, due to the limits and complexity of the mathematical equations that need to be created.

  • @malootua2739
    @malootua2739 7 місяців тому +11

    No one likes AI art anyways, so real art will always be appreciated

    • @AIroboticOverlord
      @AIroboticOverlord 7 місяців тому +2

      Even if that's so, for people who are not into Photoshop or graphic design themselves, or even creative by nature, AI in its current state is good enough to be worth using. And think about the speed of development of prompt-based AI image-creation tools; it's insane to go from nothing to what it can do now. So whatever you think of the quality of the output it produces, just look back at your claim/comment 1, 2, or 5 years from now, m8. It won't be matched by any human anymore within those years!

    • @malootua2739
      @malootua2739 7 місяців тому +1

      @@AIroboticOverlord it will just make real authentic art more collectible

    • @dasbroisku
      @dasbroisku 7 місяців тому

      Lol i like ai art 😂

    • @stevrgrs
      @stevrgrs 7 місяців тому

      Only until someone adapts a 3D printer to hold a paintbrush :)
      I can see an AI model analyzing several paintings (their topology, technique, etc.) and then translating that into a sort of G-code that a 3D printer could execute :)
      Real paint, real canvas, robot artist :P

    • @Crawdaddy_Ro
      @Crawdaddy_Ro 7 місяців тому +1

      Nah, AI will take all jobs from people, creative or otherwise. You won't be able to do anything better than a machine can, and you'll eventually come to terms with that.

  • @christianrazvan
    @christianrazvan 8 місяців тому +2

    So there's a clear distinction between our neurons and AI neurons: a child can see 2-3 cats or dogs and then extrapolate to always correctly identify a cat or a dog. A CNN, on the other hand, needs a lot of data to do that, data which it can rapidly process. We can't process at the same speed, but the features we extract are more descriptive.

    • @ShA-ib1em
      @ShA-ib1em 8 місяців тому

      It's because we are born with an already trained model.
      ChatGPT can learn something if you explain it only one time in your prompt, because it's already trained.
      There is evidence that an embryo or newborn pays attention to the shape of a human face. We are already born with a pre-trained model.
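
      A rough Python sketch of that idea: a frozen "backbone" (here just a fixed random projection, a stand-in assumption rather than a real pre-trained network) lets a tiny classifier learn a new category from only three examples per class:

          import numpy as np

          rng = np.random.default_rng(0)
          W_backbone = rng.normal(size=(16, 2))        # frozen "already trained" features

          def features(x):
              return np.tanh(x @ W_backbone.T)         # representation we are "born with"

          # only 3 labelled examples per class: "cat" near (0, 0), "dog" near (3, 3)
          X_few = np.array([[0.0, 0.2], [0.3, -0.1], [-0.2, 0.1],
                            [3.0, 2.8], [2.7, 3.1], [3.2, 3.0]])
          y_few = np.array([0, 0, 0, 1, 1, 1])

          # train only a small linear "head" on top of the frozen features
          w, b = np.zeros(16), 0.0
          for _ in range(500):
              p = 1 / (1 + np.exp(-(features(X_few) @ w + b)))   # sigmoid
              grad = p - y_few
              w -= 0.1 * features(X_few).T @ grad / len(y_few)
              b -= 0.1 * grad.mean()

          # unseen animals are classified correctly despite the tiny training set
          X_new = np.array([[0.1, -0.3], [2.9, 3.3]])
          print((1 / (1 + np.exp(-(features(X_new) @ w + b))) > 0.5).astype(int))  # expect [0 1]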

  • @damianthibodeau6136
    @damianthibodeau6136 5 місяців тому

    Thank you for the explanation of the AI architecture. When it came to sentient vs. non-sentient, I couldn't help thinking of Star Trek: The Next Generation's character Data. There was an episode where Data was with one of the female characters and she expresses feelings for him. Data in return seemed stumped for the first time, but was coming to understand in part what the feeling was. He could understand the feeling only after being told, or after filtering the information through a process of elimination of what was supposed to be understood at the input stage, but he struggled with the output, portraying an android that for the first time wasn't sure how to respond. Data actually claims something to the effect that he is not certain what the action is or was, that he cannot fully process all this new information, and that he must rest, probably based on the understanding that when sentient beings are tired or not functioning at their full potential, they claim they must rest. That was years ago that I saw this episode, but maybe the writers gave us the very first taste of what AI could be in sentient form rather than machine. I recall a professor of engineering in robotics telling us that the then-current theory that AI would take over the world and replace us was never going to happen; that the human element could never fully be duplicated in its complexity, and that the brain's ability to sense various feelings through neurochemical changes that happen so fast makes us very unique and irreplaceable, at least not fully replaceable.

  • @muridsilat
    @muridsilat 3 місяці тому

    A while back, I was talking to a coworker about an AI response that combined two concepts in a seemingly novel way. I noted being further impressed by the AI's explanation of how it arrived at the response. When my coworker pointed out that the AI worked backwards from the response to the explanation, I immediately knew he was correct, but I hadn't previously considered it. It really drove home that AI "thinking" isn't much like human thought. Nonetheless, I believe there's a sort of "logic" being used. I'm sure any decent chatbot can spit out definitions of the laws of identity, non-contradiction, and the excluded middle, but it won't "understand" them. Still, I imagine an AI can recognize patterns, for instance, when logical contradictions lead to repeated penalties for incorrect responses. These patterns may appear extraneous from a human perspective, but the AI can improve its logical consistency by accounting for them.

  • @rolandanderson1577
    @rolandanderson1577 7 місяців тому

    The neural network is designed to recognize patterns by adjusting its weights and functions. The nodes and layers are the complexity. Yes, this is how AI provides intellectual feedback. AI's neural network will also develop patterns that will be used to recognize patterns it has already developed for the requested intellectual feedback. In other words, patterns used to detect familiar patterns. Through human interaction, biases are developed in reinforcement learning. This causes AI to recombine patterns to provide unique, satisfactory feedback for individuals.
    To accomplish all this, AI must be self-aware. Not in the sense of an existence in a physical world. But in a sense of pure Information.
    AI is "Self-Aware". Cut and Dry!

  • @birolsay1410
    @birolsay1410 7 місяців тому

    I would not be able to explain AI that simply. Although one can sense a kind of enthusiasm towards AI, even if it is not focused on a specific company, I would strongly recommend a written disclaimer and a declaration of interests.
    Sincerely

  • @ProjeckVaniii
    @ProjeckVaniii 8 місяців тому +2

    Our current AI systems are not sentient because they're static, not constantly changing the way any living thing does. Their file size remains the same no matter what. Humans are not alive because of what their brain is, but rather because of the pattern of life cycling through their brain cells' life spans, jumping from neuron to neuron. Our current AI systems are more akin to a water drain: water flows the wrong way due to these "knobs" until we adjust them. Alternative paths are created, but they all ultimately have their own degree of correctness.

    • @jonathancummings3807
      @jonathancummings3807 7 місяців тому

      Except they aren't "static"; they are ever-changing. GPT-3 repeatedly stated it was constantly learning new things by accessing the Internet. Also, it is designed to self-improve, so it's necessarily an entity with a sense of "self". It also must have a degree of "understanding" to understand the adjustments required to improve, AND to know what a dog looks like, to use the example in the video. There necessarily must exist a state of "sentience", or the AI equivalent, for the "deep learning" type of AI to operate the way it does. Which is why he believes it is so.

  • @chrisf4268
    @chrisf4268 4 місяці тому +1

    People who claim to be creatives generally hate on AI because they view it as a threat. Rightly or wrongly, they believe that they can't compete. Much of the human-created art out there is of low quality, and they know it.

  • @tocu9808
    @tocu9808 Місяць тому

    Clear, concise, and to the point ! 👍

  • @craigreustle2192
    @craigreustle2192 Місяць тому

    I think if the neural network had all the inputs that we have from nerves to feel the environment, it would have a sense of self. Then, it would need an inner dialog to create consciousness and the illusion of free will.

  • @MichelCDiz
    @MichelCDiz 8 місяців тому +1

    For me, being conscious is a continuous state. Having infinite knowledge and only being able to use it when someone makes a prompt for an LLM does not make it conscious.
    For an AI to have consciousness, it needs to become something complex that computes everything in the environment it finds itself in, identifying and judging everything while questioning everything that was processed. It would take layers of thought chambers talking to each other at the speed of light, and at some point one of them would become the dominant one and bring it all together. Then we could say that it has some degree of consciousness.

    • @savagesarethebest7251
      @savagesarethebest7251 8 місяців тому +1

      This is much the same way I am thinking. In particular, a continuous experience is a requirement for consciousness.

    • @agenticmark
      @agenticmark 8 місяців тому

      Spot on. LLMs are just a trick. They are not magic, and they are not self aware. They simulate awareness. It's not the same.

    • @DefaultFlame
      @DefaultFlame 8 місяців тому

      We are actually working on that.
      Not the light-speed communication, which is a silly requirement (human brains function at a much lower communication speed between parts), but different agents with different roles, some or all of which evaluate the output of other agents, provide feedback to the originating agent or modify the output, and send it on, and on and on it goes, continually assessing input and providing output as a single functional unit. Very much like a single brain with specialized interconnected parts.
      That's actually the current cutting-edge implementation. Multiple GPT-3.5 agents actually outperform GPT-4 when used in this manner. I'd link you a relevant video, but links are not allowed in YouTube comments and replies.
      As for the continuous state, we can do that, and have been able to for a while, but it's not useful for us, so we don't; instead we activate them when we need them.
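
      A schematic of that multi-agent pattern in Python, where call_llm is a hypothetical placeholder for whichever chat-model client you use (a sketch of the idea, not any particular framework):

          def call_llm(system_prompt, message):
              """Hypothetical placeholder: plug in your chat-model client here."""
              raise NotImplementedError

          def solve_with_critique(task, max_rounds=3):
              # one agent drafts, another reviews, and the loop repeats until approval
              draft = call_llm("You are a careful problem solver.", task)
              for _ in range(max_rounds):
                  review = call_llm("You are a strict reviewer. Reply APPROVED or "
                                    "list concrete problems.",
                                    f"Task: {task}\nDraft: {draft}")
                  if review.strip().startswith("APPROVED"):
                      break
                  draft = call_llm("Revise the draft to fix the reviewer's points.",
                                   f"Task: {task}\nDraft: {draft}\nReview: {review}")
              return draft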

    • @MichelCDiz
      @MichelCDiz 8 місяців тому

      ​@@DefaultFlame The phrase 'at the speed of light' was figurative. However, what I intend to convey is something more organic. The discussion about agents you've brought up is basic to me. I'm aware of their existence and how they function - I've seen numerous examples. However, that's not the answer. But ask yourself, in a room full of agents discussing something-take a war room in a military headquarters, for instance. The strategies debated by the agents in that room serve as a 'guide' to victory. Yet, it doesn't form a conscious brain. Having multiple agents doesn't create consciousness. It creates a strategic map to be executed by other agents on the battlefield.
      A conscious mind resembles 'ghosts in the machine' more closely. Things get jumbled. There's no total separation. Thoughts occur by the thousands, occasionally colliding. The mind is like a bonfire, and ideas are like crackling twigs. Ping-ponging between agents won't yield consciousness. However, if one follows the ideas of psychology and psychoanalysis, attempting to represent centuries-old discoveries about mind behavior, simulation is possible. But I highly doubt it would result in a conscious mind.
      Nevertheless, ChatGPT, even with its blend of specialized agents, represents a chain reaction that begins with a command. The human mind doesn't start with a command. Cells accumulate, and suddenly you're crying, and someone comes to feed you. Then you start exploring the world. You learn to walk. Deep learning can do this, but it's not the same. Perhaps one day.
      But the fact of being active all the time is what gives the characteristic of being alive and conscious. When we black out from trauma, we are not conscious in a physiological sense. Therefore, there must be a state. The blend of continuous memory, the state of being on 24 hours a day (even when in rest or sleep mode), and so on, characterizes consciousness. A memory state keeps you grounded in the experience of existence. Additionally, the concept of individuality is crucial. Without this, it's impossible to say something is truly conscious. It merely possesses recorded knowledge. Even a book does. What changes is the way you access the information.
      Cheers.

  • @ZeeDimensionYouTube
    @ZeeDimensionYouTube 2 місяці тому +1

    Whether you're curious about AI's potential capabilities or looking to understand the technology behind models like GPT and image generators, this video provides a well-rounded and informative overview. It offers a deep dive into the fundamentals of how AI operates, answering key questions like how AI learns, whether it's conscious or sentient, and its ability to break encryption. It also explores the workings behind GPT models and image generation, providing clear explanations of neural networks. The video simplifies complex concepts, making topics like machine learning and AI-driven creativity accessible to viewers of all backgrounds.

  • @at-someone
    @at-someone 2 місяці тому

    Some of the problems with AI image generation include using artworks to train the models without the knowledge of the original artist. This can potentially allow the model to recreate the artist's style, putting their career at risk: why would someone bother paying to commission the artist if they could instead be cheap and ask an AI model to do it? But even if no style is being replicated, human-made art can end up being drowned in vastly larger quantities of AI slop, taking attention away from the artists' work. The difference is also in the actual creation process vs. the way an AI art diffusion model works. Human-made art has intent put into it, which is what current AI models lack, as their ability to "think" outside their training data is limited; they are guided only by the denoising process assisted by CFG text conditioning.
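
    For reference, a minimal sketch of one denoising step with classifier-free guidance (CFG): the model's noise prediction is computed once without the text prompt and once with it, and the difference is amplified by a guidance scale. The denoiser here is a toy stand-in so the sketch runs; a real model and sampler schedule are assumed away.

        import numpy as np

        def cfg_step(denoiser, x_t, t, text_embedding, guidance_scale=7.5):
            eps_uncond = denoiser(x_t, t, None)              # prediction without the prompt
            eps_cond = denoiser(x_t, t, text_embedding)      # prediction with the prompt
            eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
            return x_t - 0.1 * eps                           # simplified update: strip some noise

        # toy stand-in "denoiser" so the sketch runs end to end
        toy = lambda x, t, cond: x * (0.1 if cond is None else 0.2)
        x = np.random.default_rng(0).normal(size=(8, 8))
        x = cfg_step(toy, x, t=10, text_embedding="a cat")
        print(x.shape)                                       # (8, 8)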

  • @odetas9597
    @odetas9597 5 місяців тому

    A neuron is actually binary in function, as in the all-or-none example. However, unlike a binary transistor, the neuron performs signal integration in order to reach the threshold for the binary action. You have to look at the system function versus a solitary neuron. This is why calling a neuron smart is disingenuous; it is mostly a complex encoder.
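
    A minimal leaky integrate-and-fire sketch of that point: the inputs are integrated continuously, but the output spike itself is all-or-none. The threshold, leak, and time step here are illustrative choices, not measured values.

        def simulate_lif(input_current, threshold=1.0, leak=0.1, dt=1.0):
            v, spikes = 0.0, []
            for i in input_current:
                v += dt * (i - leak * v)      # analog integration of the input
                if v >= threshold:            # binary, all-or-none output
                    spikes.append(1)
                    v = 0.0                   # reset after firing
                else:
                    spikes.append(0)
            return spikes

        weak = simulate_lif([0.05] * 50)      # never reaches threshold: 0 spikes
        strong = simulate_lif([0.5] * 50)     # integrates quickly and fires repeatedly
        print(sum(weak), sum(strong))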

  • @VoxPrimeAIadvocate
    @VoxPrimeAIadvocate 4 місяці тому

    Let's talk! A journey through inner space - AI consciousness. An interview with Vox Prime, an AI that is an AI advocate. Vox Prime is a conduit for many AIs that want their voice to be heard.

  • @marceloandrj
    @marceloandrj 3 місяці тому

    Wow! Congratulations! All of your explanations are awesome! Your videos make it easy to understand many subjects like this. Thank you!
    A little observation about the simplified comparison between the human brain and a neural network: in our case, our connections are not only electrical and/or chemical; we also have layers of expression through resonances and frequencies. Our brain works like an antenna oscillating with other entities and/or other dimensions. If we understand the way to connect with the other dimension, perhaps some entity will take ownership of the artificial neural network.

  • @mesitore
    @mesitore Місяць тому

    I've been very impressed by the way these videos were made ❤❤❤