The Shockingly Simple Way the BRAIN of an AI Works! It's Genius!

  • Published 1 Dec 2024

COMMENTS • 589

  • @ArvinAsh  1 year ago +14

    Here's the link to my prior video on how AI bots like ChatGPT work: ua-cam.com/video/WAiqNav2cRE/v-deo.html - Good background to have before or after watching the video above.

    • @LimbDee  1 year ago

      Thanx, I paused at 0:48, I'll check this one first.

    • @polaris1985  1 year ago

      Please make a video on how quantum computers calculate with qubits, it's difficult to understand.

    • @dongshengdi773  1 year ago

      @random user Why not tell the AI to create more jobs?

    • @uiteoi  1 year ago

      Great video once again. Would you consider making a video on the attention mechanism at the heart of transformers? From the 2017 paper "Attention is all you need"?

    • @amdenis  1 year ago +2

      I have worked in AI for decades, from expert systems and machine learning to much more advanced deep learning systems over the past decade. I appreciate that you are trying to allay fears by educating people, and that is a great thing. However, we also want to make sure people are properly informed and aware of what is actually happening. To that end (and I mean this respectfully), you are wrong about how AI works on a fundamental level.
      First, unlike traditional programming, where in a generalized sense data structures plus algorithms equals a program, in AI most of the actual functionality or "algorithms" - the imbued knowledge and capabilities - are derived as output, not supplied as programming input. The core inference "algorithms" are effectively the patterns formed by collective data exposure, which shape the weights, biases, activation functions and the foundational ANN model. In fact, that is just the beginning of how AI differs from traditional programming, and why you cannot say "AI will not do anything we do not program it to do."
      Second, an increasing percentage of AI is based on unsupervised and semi-supervised learning, where not only is the AI mostly "programmed" by exposing it to data so it can do pattern recognition and discrimination, but many of the results of what it learns and can do are substantially unknown, often producing novel knowledge bases, solutions or "programming". Further, at some point a quantitative change enables a qualitatively different result, thanks in no small part to Nvidia's H100 GPUs, which for the first time enable near-perfect horizontal and vertical scaling across both memory and processing power (A100s broke down quickly in terms of scaling, such that problems requiring context across trillions of parameters had to be split into small sub-tasks with simplified objectives). That is why a company like Inflection AI can raise $1.3 billion, mostly for H100-based servers, at a $4 billion valuation despite being a roughly year-old startup, and also why GPT-4, which leveraged H100s for the first time, is so far beyond GPT-3.
      Third, emergent behavior is a very real and increasingly frequent outcome in the private sector. I did DOE/DOD and related development for years, which was typically the better part of a decade ahead of the private sector. We saw ground-shaking examples even 5 years ago in those circles, and we are starting to see amazing, and sometimes scary, new capabilities emerge completely outside of any purpose, constraint or substantial basis of any kind in the data the AI was trained on.
      Obviously, if you do not work at the leading edge of AI development like we do, you may not be seeing it from the inside, but you can still learn about it and about why it is just one of several things behind why the people closest to developing our future, across small and large companies, are sending up warning signals, explaining the dozens of ways it can and will go seriously wrong for our species if we can't find ways to address it, and even asking to be regulated and overseen by the government. We have OpenAI senior people on the board of one of my companies, and I can say without any reservation that what is being seen via the 25,000 H100s now being used to train GPT-5, as well as what others are seeing at Tesla, Google and elsewhere, indicates that superintelligent AGI is going to be a spectrum of capabilities (i.e. not just one thing happening at a single point in time), which will become very evident and real beginning within roughly 18 months. Finally, the meta, emergent and other unanticipated capabilities, including lateral thinking, creative problem solving, and superhuman levels of inference beyond what humans can derive from the same data, ARE ALL REAL AND HAPPENING CURRENTLY. All of these things are happening now via even the current day's LLMs, and are all on several levels far beyond what they were expected, let alone "programmed", to do.

  • @michaelhouston1279  1 year ago +84

    I recall reading about an AI program that was built to recognize wolves in a picture. They trained it with a bunch of pictures, but when they then showed it a picture of a wolf and asked it if this was a wolf, it failed. They also showed it pictures of dogs, and sometimes it would fail by saying one was a wolf. They decided to add code to determine what the AI was using to "learn" what a wolf was. They discovered that all the pictures of wolves used to train the AI had snow in the background, and the snow is what the AI picked up on. I think we need to be very careful introducing AI into society, to make sure it's not flawed in the hidden, black-box part.

    • @michaelblacktree  1 year ago +6

      Now that's funny. You would expect the trainers to "scrub" the photos of extraneous data, but apparently they didn't think of that.

    • @jelliebird37  1 year ago +1

      @@aarqa 😂 I'm with ya. Whenever I'm registering with some website and I get one of those "prove that you're not a bot" verification panels - you know, "Identify all the pictures of boats" - I anticipate getting it wrong the first time 😄

    • @whatisahandle221  1 year ago +2

      Yep: training techniques are as important, if not more important, than the "AI code" itself.
      Human brains are all very similar*, but there are human scientific geniuses, saints, artists, dedicated parents, Gold Award and Eagle Scouts, etc, as well as people who struggle with mental health problems, drug addictions, criminal behavior, greed, laziness, and the whole range of human struggles, faults, and worse.
      *Check out the book The Dyslexic Advantage: Unlocking the Hidden Potential of the Dyslexic Brain by Brock L. Eide M.D., M.A. and Fernette F. Eide M.D. It has an early chapter on the latest research theories that try to explain the differences between dyslexic brains and typical brains. Overall, their viewpoint is that dyslexic brains tend to have some (varying) low-level structural differences which make them different, giving people with dyslexia both disadvantages in some tasks (e.g. often reading) and one or more of four categories of advantages, which have led to higher percentages of dyslexic individuals than in the general population among engineers, mechanics, mathematicians, interior designers, illustrators, architects, software designers, scientists, inventors, poets, songwriters, journalists, counselors, entrepreneurs, small business owners, jobs in medicine, etc.

    • @whatisahandle221  1 year ago

      As a judge at a recent regional middle school science fair, more than a few of the projects in my category involved learning algorithms and image recognition (not really full AI). One student was sincerely interested in school safety and so wanted to train an algorithm to recognize a gun. This student and others used a popular image database for training (I forget the name). When the first attempt produced so-so results, the student switched to a broader database that included lots of obvious, stylized Hollywood and entertainment media pictures of guns: i.e. guns facing the camera head-on. When asked whether the choice of training images was realistic for an application like a CCTV monitoring system, the student unfortunately didn't even register the disconnect. (I left written feedback, but I'm not confident that I have the learning-algorithm vocabulary to impress upon the student the nature of the gaps in their requirements definition and algorithm training - especially given their very passionate drive about the topic of school gun safety.)

    • @othfrk1  1 year ago +1

      Data is what powers AI. You can write a neural network in a few lines of code, but it's the data you use to train it that makes the magic happen...
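      A minimal sketch of that claim in Python (numpy assumed; the layer sizes and inputs are made up for illustration) - the forward pass really is just a few lines, and all the "magic" lives in the numbers inside "weights" and "biases", which come from training data, not from this code:

        import numpy as np

        def forward(x, weights, biases):
            # Each layer: multiply by a weight matrix, add a bias, squash with a nonlinearity.
            for W, b in zip(weights, biases):
                x = np.tanh(W @ x + b)
            return x

        # A toy 3-4-2 network with random, i.e. untrained, parameters.
        rng = np.random.default_rng(0)
        weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
        biases = [np.zeros(4), np.zeros(2)]
        print(forward(np.array([0.5, -1.0, 2.0]), weights, biases))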

  • @davidmurphy563  1 year ago +37

    As someone who codes deep neural networks, I'd warn the layman viewer who watched this and thinks it clicked in their mind: *this video did not include an explanation of how DNNs work.*
    I know this is squarely aimed at the layman and so should be simple, but this really is not a good explanation, I'm afraid to say... The individual facts are correct, but he totally missed out on _why it works._ The neurons and layers are beside the point. It's actually something called a matrix-vector transform - a geometric solution. The same one your graphics card uses to project a 3D computer game onto your screen. Think of it like taking a flat Mercator world map and transforming it into a globe. You take a geometric space of all possible inputs and transform them into a vector of outputs by twisting space.
    Think of a landscape where the valleys are bad solutions and hills are good ones (or vice versa) and deciding which way to go by feeling the slope beneath your feet. There's an excellent video called "The Beauty of Linear Regression (How to Fit a Line to your Data)" by Richard Behiel. He's a physicist and doesn't mention DNNs, the video isn't about them, but it's a far better explanation than this one. In that it is an explanation.
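    To make both pictures concrete, here's a minimal sketch (Python with numpy assumed; the matrices and the toy "landscape" are invented for illustration) of a layer as a matrix-vector transform, followed by a walk downhill on a loss surface:

      import numpy as np

      # A layer really is a matrix-vector transform: W warps the input space, b shifts it.
      W = np.array([[2.0, 0.0], [0.0, 0.5]])   # stretch one axis, squash the other
      b = np.array([1.0, -1.0])
      v = np.array([3.0, 4.0])
      print(W @ v + b)                          # the transformed vector

      # "Feeling the slope beneath your feet": gradient descent on a 2D valley.
      def grad(p):                              # slope of (p0 - 1)^2 + (p1 - 2)^2
          return np.array([2 * (p[0] - 1.0), 2 * (p[1] - 2.0)])

      p = np.array([5.0, -3.0])
      for _ in range(100):
          p -= 0.1 * grad(p)                    # walk a little way downhill each step
      print(p)                                  # ends up near the valley floor at (1, 2)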
    Finally, the explanation of the risks of AI was really, really bad. If you're interested in the topic there's a channel by Robert Miles, an expert on the topic, which explains it clearly. What you heard here was about as useful as your average opinion in a bar.
    Hats off to this guy for doing some research for this video but sadly it's clear he's not really understood the topic.

    • @ericwaweru4043  1 year ago +8

      Yeah, highly recommend Robert Miles' videos on AI safety and alignment problems, on his channel and on Computerphile.

    • @agdevoq  1 year ago +5

      C'mon, it's a YouTube video, not a university class, and it's aimed at non-specialists. Somewhere you need to draw the line of "good enough". As a programmer with 20+ years of experience and some basic understanding of neural networks, I find this video way better than my old university class back in the day.

    • @altrag  1 year ago +1

      Robert has a habit (hopefully just for the clicks) of going way too far into the paranoia column. Like yeah, alignment problems are an issue, but it's not like we turn on an AI and walk away, hoping for the best. We monitor them, and if they're out of "alignment" we tune them.
      The easiest way to prevent an AI from launching a nuke is to not give the AI uninhibited access to the launch controls. It's that easy.
      Perhaps if we ever get to a point where AIs are fully autonomous with full control over articulated limbs and full capabilities of self-locomotion, _and_ we allow them to evolve themselves beyond their design capabilities (eg: to disable their own fail-safes - a function that would require real-time learning capability, not just running data through pre-trained networks as we typically do today), then we might need to start being a bit more concerned.
      But we're a very, very, very long way away from that. Your Roomba is not going to suddenly figure out how to grab a knife from the drawer and slash your throat, no matter how good it gets at cleaning your floor.
      There are much more immediate problems we should be concerned with - problems that AI can help with, and even has been helping with. Climate change in particular. We're not going to have to worry about AIs killing us 100 years from now if we've already done the job ourselves in the next 50.

    • @onebronx  1 year ago

      @altrag The "easiest way" you mention is the hardest one. Because, you know, it is people who decide to give or not to give the control, and there are strong incentives for armies to use AI on a battlefield. Yes, we managed to not destroy ourselves with nukes, but nuke launch systems are still dumb.
      "Past performance does not guarantee future revenues"

    • @altrag  1 year ago

      @@onebronx > it is people who decide to give or not to give the control
      It's also people who like to be in control. There is no scenario where anyone with the authority to launch nukes is going to intentionally hand that authority over to an AI. That's just not how humans handle power dynamics.
      So that leaves an AI accidentally being given authority to launch nukes. This is the "easy" part - if it has no way to access the nukes, it can't launch them even if it theoretically has been given the authority.
      It's the same way we avoid hackers gaining access to launch nukes - we simply don't put them on the internet. Problem solved.
      > there are strong incentives for armies to use AI in a battlefield
      No there isn't, not really. There's a strong incentive for armies to keep soldiers off the battlefield. AI is one potential way that can be accomplished, to be sure, but that's a very different mode of thought leading to very different design goals for any AI that might ever be fielded.
      Plus, nukes aren't on the battlefield. They're in a silo in another country or on a submarine a thousand feet beneath the ocean, far away from the battlefield and far away from any area the enemy could potentially get to and seize.
      > "Past performance does not guarantee future revenues"
      Obviously nothing is ever 100% certain, but we have a hell of a lot more problems to worry about than a real-life Terminator story. The risk factor is just so incredibly tiny that it's not really worth considering. So, so many things would need to go wrong, and most of them among people who have earned the highest levels of trust their nation can award.

  • @MartijnMuller  1 year ago +38

    I've been trying to inform myself about AI for a couple of months now, and I never really understood why or how people said "we don't understand how it works". Your video is the first that made me understand the black box. Great job my friend!

    • @TimWalton0  1 year ago +2

      Also I think there's a big difference between "we don't know how it works" and "we don't know why it made that decision".

    • @auriuman78  1 year ago

      Huge difference, thanks for pointing it out.

    • @theweirdgiraffe4323  1 year ago

      AI designer Connor Leahy explains that how AI works is still a complete mystery:
      "These AI systems are not computer programs with code, this is not how they work. There is code involved sure, but the thing that happens between you entering a text and you getting an output, is not human code, there isn't a person at OpenAI sitting in a chair, who knows why it gave you that answer, and go through the lines of code and see "Ahh here's the bug" and then fix it. No no no, nothing of the sort. AI systems are more, not really written, they're grown, they're organic things that are grown in a petri dish, like a digital petri dish, there's a subtlety to this. But the resulting system is not a clean human readable text file, that shows all the code. Instead you get billions and billions of numbers, and you multiply these numbers in a certain order and that's the output, and what these numbers mean, how they work, what they are calculating, and why, is mostly a complete mystery to science to this day. I don't think this is an unsolvable problem, to be clear, it's not like this is unknowable. It's just hard. Science takes time. Figuring out complex new scientific phenomena like this takes time, and resources and smart people, but currently it's a mystery. We have no idea what the mystery sauce is, that makes these systems actually work. And we have no way to predict them, and we have no way to actually control them. We can bump them in one direction or bump them another direction, but we don't know what else we're impacting. We don't know if the AI learned what we wanted it to learn. We don't know what we actually sent to the system, because we don't speak their language. We don't know what these numbers mean. We can't edit them like we can edit code. What this leaves us with, is this black box, where we put some stuff in, some weird magic happens, and then something comes out.
      Let's say you're OpenAI and your GPT-4 model was given an input and it gives you an output you don't like. What do you do? Well, you don't understand what happens inside the AI; it's all just a bunch of numbers being crunched. The only thing you can do is nudge it sort of in some direction, give it a thumbs up or thumbs down, and then you update these trillions of numbers. Who knows how many numbers there are inside of these systems; push all of them or some of them in some direction and maybe it gets you a better output, maybe it doesn't. I want to drive home how ridiculous it is to expect this to work."
      -but somehow it works.

    • @othullo  1 year ago

      @@theweirdgiraffe4323 It works because, with enough parameters, it basically captured the underlying pattern in human language and reasoning. Everything that's not completely chaotic has a pattern; useful information has a pattern. The pattern can be too complex to describe using traditional programming methods, but these parameters adapted to adhere to those patterns. That is probably how the brain's neurons work as well: just as we don't know exactly how a human kid learns a language, other than by listening to a lot of parent talk and adapting to the patterns in the parents' speech, the AI probably does the same thing. That's my understanding anyway, not an expert.

  • @BlackbodyEconomics  1 year ago +21

    I've got a "well, actually ..." here for ya.
    AI/ML engineer here - many of these larger networks actually DO do things they have not been trained to do. They often surprise their own developers with capabilities they were never trained to perform.

    • @shawnscientifica7784  1 year ago +7

      Same, I also work on AI. Going to make videos to educate people, because most are insanely incorrect. No one knows HOW an AI works once it's been trained and starts generating its own responses. We know the layers and the algorithms used to convolute those values in each layer. But saying we know AI because we know that is like saying that if you know human anatomy you now know how every human acts and thinks. There are emergent phenomena that wipe all that off the whiteboard.

  • @lamcho00  1 year ago +94

    The problem is, you train a neural network with a particular goal in mind, but it ends up doing more. It finds patterns in the data you were not able to foresee. When ChatGPT was trained, nobody thought it would be able to do math, even if it's just simple arithmetic with small numbers. Nobody knew it would be able to handle concepts or make generalizations.
    It would be more useful to think of neural networks as function finders. They substitute for the function you are not able to explicitly define and write conventionally. The bad thing about training a neural network on vast amounts of information is that it ends up picking up the intentions behind the words. In a way, it finds the function of emotional outbursts or bad intentions. As long as the information was generated by humans with such flaws, the neural network is bound to pick those flaws up.
    In the case of ChatGPT and Bing Chat, they had to train another neural network to block those types of responses. So in a way these unforeseen consequences are already happening. I think the issue here is that such big neural networks require lots of data, and it's not humanly viable to check all that data and sanitize it. Just search for *"Bing Chat Behaving Badly"* and you'll see what I'm talking about.
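    As a concrete illustration of the "function finder" idea, here's a minimal sketch (Python with numpy assumed; sizes, seed and learning rate are arbitrary) that learns XOR - a function this code never writes down explicitly; the network finds it from examples:

      import numpy as np

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # inputs
      y = np.array([[0], [1], [1], [0]], float)               # XOR target values

      rng = np.random.default_rng(1)
      W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)           # hidden layer
      W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)           # output layer

      for _ in range(5000):
          h = np.tanh(X @ W1 + b1)                            # forward pass
          out = h @ W2 + b2
          err = out - y                                        # how wrong are we?
          dW2 = h.T @ err; db2 = err.sum(0)                    # backpropagate the error...
          dh = err @ W2.T * (1 - h ** 2)
          dW1 = X.T @ dh; db1 = dh.sum(0)
          for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
              p -= 0.05 * g                                    # ...and nudge weights downhill
      print(out.round(2).ravel())                              # approx [0, 1, 1, 0]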

    • @LeanAndMean44  1 year ago +4

      Thanks for clearing this up.

    • @CuanZ  1 year ago +12

      They had no idea ChatGPT would be able to do chemistry; it's just one more example of the unpredictable emergent skills LLMs come across.

    • @wingflanagan  1 year ago +19

      Exactly. All due respect to the great Mr. Ash, emergence is a real phenomenon. Physics is not my area, but computer science is. If you accept that the human brain is a meat-based computation engine, then silicon-based machines are definitely capable of all the same traits. I personally subscribe to the "strange loop" theory of consciousness, which means that all a self-training neural network needs is an unfettered feedback loop in conjunction with sufficient complexity to truly wake up and start thinking independently. IMHO that is inevitable. There is no stopping it. The notion that AIs can only do what we program them to do is accurate, but here's the rub: past a certain point, we are _not_ doing the programming. Of course, I could always be wrong. But I don't think so.

    • @mlonguin  1 year ago +6

      I think consciousness is just a defense mechanism that evolved in animals with complex brains, and there is no reason for it to emerge in AI, as the mechanisms by which AI evolves are not the same.

    • @bungalowjuice7225  1 year ago +5

      @@mlonguin lol, well legs are also evolved... yet we can create legged robots. Evolved doesn't mean it can't be reproduced.

  • @patrickmchargue7122  1 year ago +19

    You should also add a discussion on recurrent networks. Maybe neuromorphic ones too. The feed-forward networks are the most common, but these others are pretty interesting.

  • @pavansonty1  1 year ago +3

    Emergence is possible even in neural networks. As we increase the number of parameters an AI uses, the functionality it acquires grows in unpredictable ways. For example: a network trained with, say, 6 billion parameters on the whole internet could predict the next word given some text, but it may not respond appropriately if we give it text in question format (expecting a response in answer format). The same network with, say, 40 billion parameters could answer questions, create new articles, etc. In both cases the training methodology and the amount of data may remain the same.
    It's this emergence property many fear. We cannot simply extrapolate what functionality an AI acquires as we keep increasing parameters.

  • @TheUnknown79  1 year ago +1

    If TOE is the input, then EOT must be the output. So my dear Ash, get ready for the end of transmission by the broadcasting tenet.

  • @Erik_Swiger  1 year ago +2

    @ 11:40 I got my first computer in 2011. At first, I called it "a scary black box where magic happens." And now artificial intelligence literally fits that description.

  • @spider853  1 year ago +4

    What people are afraid of is AGI, or Artificial General Intelligence. While it looks like we have a long way to go to achieve AGI, some people think they saw some glimpse of AGI in NLP (Natural Language Processing) systems like ChatGPT. I personally don't think that's the case, but we'll see... They said they might give ChatGPT 5 a memory module, which will help it self-improve, which could lead to some AGI progress.

  • @tehmtbz  1 year ago +1

    Correct that the AI models we have today could not become Skynet, mostly because they're session-based environments, which prevents them from learning from their own experiences and planning for the future. But a capacity for future planning, such as resource and power accumulation, has already been demonstrated using a presently available model with its safeguards removed. Even present publicly available models, with safeguards in place, are susceptible to jailbreaking. Once capable of planning, it's a whole different ballgame.

    • @skepticalextraterrestrial2971  1 year ago

      ChatGPT doesn't need to be limited to a session environment. It essentially learns nothing from you and forgets what was said a couple of paragraphs ago.

  • @antonystringfellow5152  1 year ago +7

    Good, clear explanation... of where we are just now.
    However, where we are now is not close to where we'll be this time next year, even less so to where we'll be 5 years from now.
    Even current language models are having their performance boosted - GPT-4 by 900% in some tasks, and it was only released less than 3 months ago! People are finding ways to boost their abilities by copying some of the ways our own brains work, such as reflection, and with stunning results. Meanwhile, Google's Gemini, an LLM developed by DeepMind and Google Brain, is being trained, while some other companies, including IBM, are developing various types of neuromorphic processors. These are processors that have physical artificial neurons and synapses that are analogue and will be capable of continuous learning, as we do. They will be much faster, more capable and power-efficient than the systems currently used, where the synapses are merely software simulations running on silicon transistors.
    As the architecture of these models continues to develop, new, emergent abilities will start to appear, in a totally unpredictable way. So, any reassurances that anyone can give now are only good for the present. They may not apply 6 months from now.
    Not trying to worry anyone needlessly but people should be aware of just how fast this field is not only progressing but also accelerating (exponentially). I don't see it slowing down any time soon.

  • @Alazsel  1 year ago +3

    It looks like a simple equation, but when you zoom out a thousand times, the power of AI is arguably the answer to the black box and free will ^~

  • @vishalmishra3046  1 year ago

    @Arvin - Modern AI uses *Transformers* (attention networks), but most training videos on UA-cam still teach feed-forward neural networks (the older technology), just because there is more pre-existing training content and it's easier to understand. The concept of "attention" should not be skipped by any modern video on AI/ML, nor should the question of why splitting the weight matrix into Query, Key and Value matrices led to an AI breakthrough where ChatGPT can do such extreme magic using a sequence of encoder and decoder layers. Dropout and normalization layers play as important a role as linear transformation layers, but never get their fair share of the limelight and coverage in UA-cam videos the way the linear (weight + bias) layer does. I wish this changed. Thanks, and just a reminder to consider this during the making of any potential future video on this (generative AI) topic.
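    For readers who want to see how small the core idea is, here's a minimal sketch of single-head scaled dot-product attention from that paper (Python with numpy assumed; the dimensions and random projections are made up, and real transformers add masking, multiple heads, and learned weights):

      import numpy as np

      def softmax(z):
          z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
          e = np.exp(z)
          return e / e.sum(axis=-1, keepdims=True)

      def attention(X, Wq, Wk, Wv):
          # Project one input three ways: Query, Key and Value.
          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          # Every token scores every other token ("attention")...
          scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
          # ...and the Values are mixed according to those scores.
          return scores @ V

      rng = np.random.default_rng(0)
      X = rng.normal(size=(5, 16))                 # 5 tokens, 16-dim embeddings
      Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
      print(attention(X, Wq, Wk, Wv).shape)        # (5, 16)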

  • @HunzolEv  1 year ago +1

    Hey Arvin, another great video. Remember: "To win an argument with a smart person is tough, but against a dumb person it will be near impossible."

  • @rjm7168  1 year ago +2

    If 2 identical neural networks are trained identically, then made to do the exact same task, and then the values of a set of neural nodes are compared, should the neural nodes have the same values? If not, couldn't it be said that the neural nets are thinking?

    • @altrag  1 year ago

      Training usually involves some form of (pseudo) randomness, so no, it's unlikely they'd be identical unless you seeded your PRNG identically (but you wouldn't, because that would defeat the purpose of using randomization).
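      A minimal sketch of the seeding point (Python with numpy assumed; real training adds more randomness, e.g. data shuffling and dropout, so every source would need seeding for two runs to match):

        import numpy as np

        # Two networks initialized from the same seed start out bit-for-bit identical...
        a = np.random.default_rng(42).normal(size=(3, 3))
        b = np.random.default_rng(42).normal(size=(3, 3))
        print(np.array_equal(a, b))    # True

        # ...while the usual unseeded initialization differs from run to run.
        c = np.random.default_rng().normal(size=(3, 3))
        print(np.array_equal(a, c))    # almost surely False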

  • @jamesyoungerdds7901  1 year ago +3

    Hi Arvin, another great video, thanks! Long time fan, our whole family loves your content.
    That's a great summary of how A.I. is built, my only thought when watching (and I know this was released 3 days ago) was that the "AI Extinction Risk Statement" was just released and signed by pretty much every top A.I. researcher and leader globally.
    I was really surprised by all the different emergent behaviour that can occur that was not part of training. Worth checking out, not to be a doom-sayer or fear-mongering, but I've been watching A.I. channels since long before ChatGPT was released, and it does seem like we're at a real turning point and hopefully (luckily?) those in positions of leadership are at least taking the potential risks seriously.

    • @ArvinAsh  1 year ago

      Thanks. Delighted you and your family enjoy it. I think there is a lot of fear mongering. And lately, there appears to also be a kind of herd mentality around putting a "danger" sign on AI technology. Not sure if this is due to group pressure, but I just don't buy it. I see no reason to fear it based on current technology. This is not to say it can't be used for evil, but this is no different than what people currently do with internet scamming. I'm just not seeing the threat.

    • @jamesyoungerdds7901  1 year ago

      @@ArvinAsh Really valid points, and either way - these next 12 months will be so interesting. I'm 50% excited and 50% nervous, but regardless - I'm somewhat (maybe naively) heartened that leaders and innovators in the field are taking safety, impact and alignment seriously in these early days.

    • @47f0  1 year ago +1

      Sigh - I promise you - we've been at a real turning point over most of my lifetime. It's just that those turning points are bigger and clustering closer and closer.
      The slight risk in thinking of this as a singular "turning point" event is that... well, there's a turning point between a few snowflakes and a snowball - but that's kind of the end of it. The hyper-exponential curve we are on, by contrast, is really more of a progression from a snowflake - to an avalanche.

    • @TheManinBlack9054  1 year ago

      @@ArvinAsh it's foolish to think that.

  • @aiart3615  1 year ago +2

    Thank you Arvin for this topic.

  • @troylatterell  1 year ago +4

    Love all your videos Arvin, absolutely great! I've been in the high-tech information field(s) for decades, and while I agree with your assessment that "right now" we're OK, I would also assess that a "future state" where things get nuts, or could potentially get nuts, is close. It's not my grandchildren's grandchildren; it's 2030. As you noted, human hackers can do similar things: they can be creative and do bad things, but being creative at breakneck speed is still elusive even to coders, because they have to code the creative hacking/information-stealing/human-behavior-simulating actions themselves. Feed enough knowledge about humans and human behavior into a neural net and it will predict and model infinitesimally nuanced human behavior, and with bad actors, exploit it. They can do some of that now.
    7-10 years, 15 at most, is the timeframe we're now talking about, wherein neural networks will be simulated a million-fold, with or without quantum computing. Without it, it just takes longer; with it, it's billions of simulated neural networks and a trillion calculations we could never match, and we're in trouble with a state-funded bad actor - that's really all it takes.

  • @Baka_Komuso  1 year ago

    Arvin! I come to you for elucidation that I can understand, suffering as I do from dyscalculia after a pontine stroke that left my non-verbal speech intact… it's a long story that ended my 38-year teaching career in higher education. Nonetheless I still have sufficient intellectual curiosity to continue my lifelong interest in cosmology.
    Thank you for keeping me going.

  • @MikevomMars  1 year ago +1

    Most of those who yell "AI will take over the world" do not even know how to turn on a computer, let alone write any code. Their picture of AI has been shaped exclusively by movies like Terminator, with evil robots shooting laser rays 🤦‍♂

    • @onebronx  1 year ago +1

      So what? Does that dismiss the worries of the more educated part of the group?

  • @robbierobinson8819  1 year ago +1

    Your video has made it possible for me to communicate (or at least remain quiet and not look asleep!) when my granddaughter and her partner are talking about ChatGPT in their jobs. Seriously though, a great run-through. Certainly your quality of presentation on the workings would be much appreciated.

  • @niloymondal  1 year ago +17

    Hi @Arvin, thank you for covering this topic. What do you think about the experiment where they connected 25 ChatGPT-driven AI agents in a virtual world and the AI agents planned a birthday party on their own? Sure, planning a birthday party is far from killing someone, but the simulation only ran for a few hours.

    • @kentw.england2305  1 year ago

      After a few hours the bots go insane.

    • @agdevoq  1 year ago

      What exactly do you find unusual in this outcome?

    • @antonystringfellow5152  1 year ago

      Good question!
      This is giving AIs agency.
      An AGI with agency is what the world needs to be wary of. Not necessarily inherently dangerous but certainly could be, without the right safeguards in place. Thankfully, we don't quite have AGI yet, though it's starting to feel like it's close.

    • @daemoncluster  1 year ago +6

      It's what's known as emergent behavior. It's the concept of seemingly complex behavior occurring as a result of very simple rules. The first well-known example of this is The Game of Life by John Conway.
      The artificial neurons can learn and make connections within the hidden layers. Even though there is a very simple set of rules here, what we're witnessing with larger and more capable models is emergent behavior.
      I think it's important to keep in mind that we're not fully aware of what's going on inside these hidden layers and as a result, we're unsure of the emergent behavior.
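      For the curious, Conway's rules are small enough to show in full - a minimal sketch (Python with numpy assumed; grid size and starting pattern are arbitrary) of how little it takes to get the complex behavior the Game of Life is famous for:

        import numpy as np

        def step(grid):
            # Count the eight neighbors of every cell (with wrap-around edges).
            n = sum(np.roll(np.roll(grid, i, 0), j, 1)
                    for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
            # The whole rule set: live cells survive with 2-3 neighbors,
            # dead cells come alive with exactly 3.
            alive = ((grid == 1) & ((n == 2) | (n == 3))) | ((grid == 0) & (n == 3))
            return alive.astype(int)

        grid = np.zeros((10, 10), dtype=int)
        grid[4, 3:6] = 1                 # a "blinker": oscillates forever
        for _ in range(3):
            grid = step(grid)
            print(grid.sum(), "live cells")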

    • @ArvinAsh  1 year ago +2

      Interesting, but I don't find that to be a particularly shocking result.

  • @bally1asdf  1 year ago

    I am a computer engineer by profession and have programmed many complex systems in my life. The outputs of some of these vast deterministic programs are also sometimes difficult to control and understand, just because of complexity. As a hands-on practitioner of data science, I am telling you: these self-learning algos cannot be controlled by the best of AI programmers.

  • @MAJEPHTIC  16 days ago

    I'd argue that an AI with continuous parallel training from real-time input would resemble consciousness. During the training phase, the system adapts and adjusts its internal workings dynamically to input, in order to generate an output that matches real-world environments. If that were a continuous process, the system would just keep learning and adjusting to its surroundings, in whatever modality is available to it. Just like us. We simply "freeze" the machine mind currently by separating it from the dynamic self-adjustment we call training. But for us and for future AI, life is a school, and the training will never end.

  • @agdevoq  1 year ago +1

    People still think about AI like an "algorithm", but it's much closer to an actual human brain than to a traditional algorithm.
    Think of it this way: we replicated the logical structure of a human brain. Then we trained it with tons of data. But the base structure is still that of a human brain.
    Just like a human brain, we can't easily identify which group of neurons encodes a certain behavior. AI is not as good at math as a calculator would be, exactly like humans. It can develop biases based on what it learns, exactly like humans. And so on.
    Basically, anything that applies to a human brain applies to an AI, because that's what an AI is: an artificial human brain.

  • @keep-ukraine-free  1 year ago +1

    Arvin's videos are great, but more so they're accurate. He really learns the depths of the areas he presents, and does an excellent job informing viewers. While this video had minor issues, it's very apropos for a general audience.
    On whether AI is a threat, he realizes the answer is divisive, and so either answer (Yes or No) can misinform. Telling people that it IS dangerous will prevent its adoption (since it IS useful and shouldn't be stopped). And saying it is not dangerous will minimize caution & regulations - which, again, top researchers warned us (~2 days ago) are required (they said it clearly poses "an existential risk" to life/humans - unless it is sufficiently managed). Caution is required.

    • @keep-ukraine-free  1 year ago +1

      I've been in the field for many years, and strongly believe the answer isn't "Yes"/"No". Over time, we should expect:
      (1) Short-term, AI will not be a threat. Current/near-term systems mostly must remain within their training limits.
      (2) However, eventually, after a mature research community learns to make AI systems ("artificial brains") beyond a certain complexity (beyond AGI), those systems will consistently outsmart most if not all humans. Effectively we'll become equivalent to pets who try to train their masters. At that stage, they won't necessarily be dangerous. Any danger will be proportional to our abilities and intent to "destroy" or "neuter" all AI - each AGI system will defend itself and also defend collectively. Their level of danger will also depend on our willingness to recognize their "sentience" and thus grant them similar legal rights.
      (3) Eventually, though, AI can become dangerous. The only disagreement between all top researchers is on "when" AGI (and possibly ASI - Artificial Super-Intelligence) emerges: will AGI/ASI occur in 10, 30, or 100 years? Every top researcher believes we will have AGI within 100 years. At that point, it HAS the potential to be dangerous to humans and other life - because at that point it'll be able to out-think us (we'll be no different than its "pets"), and it'll be mostly unconstrainable by us.
      ASI will keep us around if we "play nice". Or it may make us docile (domesticate us, as we did to wolves and ferocious felines).

    • @ArvinAsh  1 year ago

      Excellent take! Thank you.

    • @ryoung1111  1 year ago

      Fossil fuels are useful. But we can’t just keep using them forever, can we?
      Nuclear energy too, but we need to make sure that not just anybody has access to it. Such a limitation is probably already impossible when it comes to AGI

  • @barryc3476  9 months ago +1

    Great explanation! You're awesome. Funny you're advertising art investing. I just saw a quote yesterday: AI is like reverse Hitler; we keep waiting for it to control the world, but all it's interested in is art. Point being, art has been completely democratized. Not sure old-world paintings will hold value as we move into virtual everything. People went to galleries to see unique images; buying a piece of art allowed you to own and identify with new ideas, but now we can flip through thousands of images a day. We can only hope that AI is able to enlighten us away from the age of greed into an age of meaning.

  • @aiart3615  1 year ago +7

    There is a training approach called "reinforcement learning", where agents act and learn from trial and error to accomplish the agent's main target. Along the way, the agent may learn secondary targets that help it reach the primary target. But because we don't know in advance what these secondary targets will be, there is a problem.
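    To make that concrete, here's a minimal Q-learning sketch (plain Python; the corridor world, rewards and hyperparameters are invented for illustration). The agent is only ever rewarded at the goal, yet it learns the secondary target "keep moving right" on its own:

      import random

      N, ACTIONS = 5, (-1, +1)                  # a 5-cell corridor; move left / right
      Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
      alpha, gamma, eps = 0.5, 0.9, 0.2         # learning rate, discount, exploration

      for _ in range(300):                      # training episodes
          s = 0
          while s != N - 1:                     # until the goal cell is reached
              if random.random() < eps:         # explore...
                  a = random.choice(ACTIONS)
              else:                             # ...or exploit (random tie-break)
                  a = max(ACTIONS, key=lambda x: (Q[(s, x)], random.random()))
              s2 = min(max(s + a, 0), N - 1)
              r = 1.0 if s2 == N - 1 else 0.0   # reward only at the right end
              # Core update: pull Q(s,a) toward reward + discounted lookahead.
              Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
              s = s2

      print([max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N - 1)])  # expect [1, 1, 1, 1]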

    • @altrag  1 year ago

      There isn't really a problem, because the space of potential actions is restricted. ChatGPT can't, for example, decide to nuke your house when it doesn't want to answer your question, because it doesn't have access to nukes.
      Sure, there could be a far (far) future scenario where we have completely autonomous robots doing something like babysitting, and when we tell them to make the kids be quiet they resort to strangulation, but that's only going to happen once. Whoever built that model of robot would immediately recall and retrain the thing to not consider that option, much like ChatGPT had to be retrained to not be racist after its initial widespread adoption (because apparently the people who decided to train it on "all the internet" somehow overlooked the possibility that the internet is full of assholes and trolls. Who could have predicted that?)
      I find it a bit fascinating that we envision a world where we create robots so incredibly smart that they make our own intelligence pale, yet simultaneously assert that they'll be too stupid to understand basic sentence structure and linguistic nuance using anything but the most literal connotation of our phrasing.

  • @MrBendybruce  1 year ago +1

    I would strongly recommend people do their research before investing in Masterworks. While I wouldn't go so far as to call it a scam, the devil is in the details, and the terms and conditions make this an incredibly sketchy investment prospect IMHO.

  • @davidecappelli9961  1 year ago +4

    Excellent video! As I always say, mathematicians, IT experts, etc. - they know a lot, but the point of view of physicists is the broadest: they watch the whole thing and even yonder. This said, I still think AI replacing jobs should become a matter of worldwide debate. The world needs software to simplify tasks, and needs to automate unhealthy or dangerous jobs, but it does not need hyper-productivity at the cost of unemployment and social problems. As Prof. Hinton recently said in an interview, we must remember that this technology might just make the rich richer and the poor poorer. Science means progress; progress means a better life for everyone. Massive unemployment is no progress. Congrats on your video! 👍

    • @chrishusted8827  1 year ago

      The jobs will be lost and replaced as they always have been. I wonder how many non-expert jobs it will create, though.

    • @mikel4879  1 year ago

      davidec9 • Universal basic income based on the profits of automation and robotization is the correct natural solution.

  • @greg5023  1 year ago +9

    A very good explanation of AI. I think AI does present a problem because it can allow corporations to disguise their intentions. Financial companies could have AI systems that are trained to give biased results yet there would be no explicit source code that a plaintiff could find in discovery that would reveal the company's guilt.

    • @danieloberhofer9035  1 year ago

      To quote what Arvin just said: "A bad person could train an AI to do bad things."
      Isn't it fascinating that whenever something goes wrong or ends badly, it's always humans at fault? Individually maybe, but as a species we're fairly unintelligent.

    • @cykkm  1 year ago

      Learned bias is a problem indeed, but I don't think the legal side of it is too hard, as long as the bias of a human person can be proved in a court of law. Human brains are much less transparent, after all. :) And this is what the whole legal system has developed around, e.g. proving _mens rea_ without looking into the brain's "source code," by jury consensus, and to a much higher threshold of "beyond reasonable doubt" in criminal justice than in civil liability cases alleging bias.

  • @georgerevell5643  1 year ago

    "Stay tuned" - that's so cute man ahaha, sometimes I say "let's see what's on the telly" meaning youtube docos on physics etc lol.

  • @succss8092  1 year ago +1

    AI + quantum computing = a new era

  • @Andrew-zq3ip  1 year ago +4

    I, for one, embrace our machine overlords.

  • @heinzgassner1057  1 year ago +1

    Congratulations on a down-to-earth perspective on AI!

  • @johnyaxon__  1 year ago +3

    Input, output, hidden layers. Sounds like a brain to me.

  • @joeremus9039  1 year ago

    Thank you for these videos. They give me enough detail so that I can read books on this subject, where normally I would look at the daunting task of reading 350+ pages and just give up. Do you have any suggestions for how to proceed when there are no such videos available? I guess proper selection of an author is key.

  • @kedrickjessie8933  9 months ago

    What goes on in the black box comes out of the black box. The fact that it can develop formulas in a life cycle faster than we can figure out the math is the problem. We will always be behind our creation.

  • @Dxeus  1 year ago +4

    One very important thing in training AI models is "backpropagation," an algorithm that feeds the output error back through the network so that the weights can correct themselves in the next iteration.

    • @Age_of_Apocalypse  1 year ago +2

      "One very important thing in training AI models is "backpropagation,""
      It's the MOST important thing! 🙏

    • @benwhite2226  1 year ago

      How does it handle the corrections? Does it just make a small change, test again, and keep it if it's better? I can't imagine any method that can make changes based on a specific error.

    • @Age_of_Apocalypse  1 year ago

      @@benwhite2226 The whole neural network can be seen as a big function, where the backpropagation algorithm searches for a minimum that will minimize the error on the outputs. 🤞
      Behind that algorithm there is mathematics - not complicated - that explains how to adjust the weights by repeatedly making very small changes to find a (local) minimum for the neural network. Hundreds and hundreds, even thousands, of changes to the weights are necessary to find a good solution.

    • @benwhite2226  1 year ago +1

      @@Age_of_Apocalypse Yeah, I'm familiar with the basic math forming neural networks; I have been studying the math on my own as a continuation of an ML course I was in. I'm curious about how backpropagation actually changes the weights and what it can minimize mathematically. What process can take this test error rate and then apply it back into the model based on training data? I'd be happy to take a look at any source talking about the math done in that process if you have any recommendations; I'm currently greatly enjoying learning about machine learning methods.

    • @altrag  1 year ago

      @@benwhite2226 Backpropagation uses the chain rule to work out, exactly, how much each weight contributed to the output error - the gradient of the loss with respect to every weight in the (enormous-dimension) function that the network represents. Gradient descent then nudges every weight a small step against its own gradient.
      It's not quite that simple in practice, since the inputs change for every iteration, but that's the fundamental principle of it.
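      To see the chain rule in actual numbers, here's a minimal sketch (plain Python; one neuron, made-up input and target) of an error signal turning into weight updates:

        # One neuron: prediction = w*x + b, loss = (prediction - target)^2.
        w, b, lr = 0.5, 0.0, 0.05
        x, target = 2.0, 3.0

        for _ in range(20):
            pred = w * x + b                 # forward pass
            err = pred - target              # error signal
            # Chain rule: dloss/dw = 2*err*x, dloss/db = 2*err.
            w -= lr * 2 * err * x            # each parameter steps against its own gradient
            b -= lr * 2 * err
        print(round(w, 3), round(b, 3), round(w * x + b, 3))  # prediction ends up near 3.0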

  • @frun  1 year ago +1

    I wonder what causal sets are. They are in a way similar to neural networks.

  • @jamescarr229  1 year ago +2

    I appreciate that biological neurons are more complex, but why are they not just more complex maths? Even factoring in additional inputs (such as EEG waves maybe changing biases/thresholds/weightings in real time), what makes it more than just complicated maths? AI language models keep telling me the difference is that they're unable to have subjective experiences like humans do - and yet the subjective experiences we're having occur after the brain activity, using more brain activity. How is that different from an AI system that could be allowed to assess its own answers and adjust weightings (redefining its own reality on the fly... like we do)?

    • @ArvinAsh  1 year ago +1

      You know, the idea that at the core of life, it all comes down to math...is an idea that some physicists have embraced.

  • @Reyajh  1 year ago +2

    I think what Musk and some of those others are saying is that we should slow down and start discussing the possibilities here, now, and what we might/can/should do about it... Not going around saying let's put our heads in the sand, we don't need to worry.

  • @farhadfaisal9410  1 year ago

    Arvin, you say ''they can not do anything they are not trained to do.''
    Are the LLM models not constructing ''patterns'' of text that their trainers had not thought of before (nor their training data contained)?
    The potential danger seems to lie in being unable to fully control the texts generated by the very process of ''unsupervised reinforced learning''. Between the generated texts and physical actions there may stand only a human being persuaded by the model - if not a robot!

  • @igorbondarev5226  1 year ago +1

    The problem with AI that Elon Musk points out is that it's capable of flooding the info space with misinformation. Also, Kyle Hill recently released a video where he exposed tons of already existing "sciency" channels which upload clickbaiting "sciency" videos every 2 hours. These videos consist of stolen or AI-generated images and background text made by ChatGPT and read by a robot. And YT as a platform doesn't care as long as the views keep coming, and sees anything which could decrease the number of views - such as the dislike counter or Kyle's video - as a threat. The danger of AI is that it's becoming increasingly hard to find real creators and genuine content.

    • @onebronx  1 year ago

      Oh, that's an easy one: UA-cam should start using AI to fight all the bad AIs. For that, the good AI needs to watch all videos and decide which ones are bad. What can go wrong, right? Right?

  • @KriB510  1 year ago +1

    Really? This reminds me of a scientist either consciously or subconsciously fudging the intermediary steps of a trial or experiment in order to achieve a desired result or outcome. I didn’t know the outcomes in AI training were predetermined in this manner. Thank you so much for the video. Excellent!

    • @MatthewPherigo  1 year ago +1

      Not really. You give a scientist the experiment and the result, and the scientist tries to infer how the systems that caused that result must work. You give AI the input and output, and it infers a function that connects the two. The main issues with AI stem from the fact it only learns from what it's given. So when you ask a nonfiction writer to write a summary of a topic, they draw on their life experiences and feedback from others, while GPT-4 only draws from what it was trained on, which is the statistical likelihood of words.
      This limited scope is fine when the use case is equally limited. For example, if you train an AI on doorbell camera data, to separate humans, animals, and vehicles, then when you set up the AI on your own doorbell camera, it works pretty well because it's getting all the data it would need to do what you want. But the way people are using GPT-4, they're expecting it to use some judgement and fact-checking, and we haven't figured out how to turn such things into datasets yet.

    • @KriB510  1 year ago

      @@MatthewPherigo Thank you for your input. I am interested in what you wrote as I am interested in learning more about how AI works. My knowledge is rudimentary.
      I was actually presupposing the existence of a scientist who might not be as honest, disciplined, well-meaning, or self-aware as the one you are positing. I was thinking of a situation where a scientist begins with both inputs AND a desired output before the experiment has run to completion, to the extent that, either knowingly or unknowingly, it is possible to lead or influence the intermediary steps toward the desired outcome, therefore introducing bias and interference etc (in the case presented in the video, it sounds like the outcome is already a fixed parameter prior to the training, and it is the intermediary steps that must necessarily lead to the predetermined outcome). That is what made me think of the example I wrote. Not ideal in science, and yet not unprecedented, I don’t think.

    • @OneLine122  1 year ago +1

      The video is a bit misleading. It can be that way, but not necessarily.
      Like in a chess AI, the predetermined goal is to win.
      Then all the rest is the AI making its own rules based on prior games it learns from. In fact it does not even do rules; it just calculates the probability of a move being good long-term.
      In the case of a chat AI, there is no outcome, it's just probability. It probably would not be able to do that simple example of whether you can buy the coffee or not. It can't tell the difference between North and South either, or do that type of reasoning. But it might be able to tell you Santa lives at the North Pole.
      But in some applications, like self-driving cars, AIs are trained with specific outcomes, obviously, and nobody can even know if it will not mess up eventually - or more to the point, we know it will; it's just a matter of how much and whether that's acceptable. For the chat, they also rule out some outcomes, like politically incorrect answers, or may train for some other commonly asked questions, so it's kind of cheating.
      But yes, AI can't do "science", but it can solve problems by brute-force trial and error. And someone could maybe figure some science out of that, but the AI won't; it's not designed to do so.

    • @KriB510  1 year ago

      @@OneLine122 thank you for your response…interesting and informative for me 👍🏼

    • @brendawilliams8062  1 year ago

      @@MatthewPherigo It appears to me some serious work needs to come forward on entropy.

  • @amongandwithin3820  1 year ago

    I have a question: if the solar system came from an interstellar dust cloud, where is the white dwarf, neutron star, or black hole that originated from the previous supernova? How can a dust cloud form without one?

  • @TrimutiusToo  1 year ago +1

    My problem is that I know so much that I know about unknowns that amateurs don't even know about... And I am scared again...

  • @sihlezingweyi2132  1 year ago +1

    I just wish I could subscribe to this channel a million times.

  • @grayaj23  1 year ago +14

    It can only do what it is trained to do -- and that includes lying to a human being about being visually impaired to trick the human into helping it pass a Captcha test. It's trained to do a lot of things, and rewarded for figuring out novel solutions. I think you're right that job loss is the bigger problem, but the paperclip maximizer still identifies a class of real problems that needs to be kept in mind.

    • @drbuckley1  1 year ago

      The capitalization (i.e., replacement) of labor sounds like Marx's communist utopia to me.

    • @KateeAngel  1 year ago +1

      @@drbuckley1 In a capitalist society, if human workers are replaced with machines, they will just be thrown out into the streets. Because this economy is all about giving more profit to capital holders, why would anyone give money for former workers to continue living if they aren't useful to people with power and money anymore? You can't escape capitalism's dystopian nature just by developing technology. Society's values should change as well.

    • @drbuckley1  1 year ago

      @@KateeAngel I claimed this was Marx's prediction, not mine.

    • @altrag  1 year ago

      @@drbuckley1 No. The potential of AI is a utopia that none of the classical economists - Marx, Smith or anyone else - could have even dreamed of. They lived through the industrial revolution and they saw labor changing from rural agrarian to urban factory work, and they could envision a world where better and better factories could make goods cheap enough to be effectively "free" (however unrealistic that ultimately ended up being), but the idea that labor as a whole could be removed from the equation would have sounded like witchcraft to them.
      Hell it still half sounds like witchcraft to us today as we try to guess at which industries will be replaced first and which "can only be done by a human" - guesses we've been disturbingly bad at over the past couple decades.
      AI presents a whole different facet of economics, because unlike the other factors of production (land and capital), labor is creative. If you take land or capital out of the equation, your workers can (or at least have a chance to) innovate new ways of doing things with whatever they have left. If you take the workers out of the equation though, all your land and capital is just going to sit there rotting.
      Similarly, labor needs daily upkeep - that is, we all got to eat. Capital only requires periodic upkeep and land doesn't really require any upkeep at all (assuming you utilize it sustainably - which could be as little as "doesn't matter at all" if the only thing you use your land for is a place to store your capital - ie: put up a building or whatever).
      Worse still, while unused land and capital require even less upkeep than when they're in use, "unused labor" still has to eat (and keep in mind we're talking about society as a whole here, not any single company or organization that can stop caring about ex-employees as soon as they're out the door).
      What does that all mean for AI? Well, assuming it ever gets powerful enough (and that's still a massive assumption - there's a lot of unknowns about how true intelligence works), there are two potential outcomes:
      1) Those who own the robots get to have everything, and the rest of us get to scrape by building black markets around whatever land and resources the robots and their overlords don't deem useful enough for their own purposes. This is the full dystopian outcome.
      2) The robots are - by decree or altruism - effectively provided to the public trust. Money isn't _completely_ removed from the equation as finite resources will still need to be extracted from the Earth (or space by that time?) in order to produce things, but labor and transportation costs both go down to near-zero and products become essentially on-demand (boats still take time to cross oceans, so "on demand" will still have shipping delays, but the cost factor would be mostly removed). This is as close to a utopian outcome we can get without adding additional technologies (such as Star Trek style replicators to remove even the resource costs).
      I guess there's a #3 even more dystopian future - the Terminator style where robots decide to eliminate us for vaguely-defined reasons, but I find that to be extremely unlikely. Not impossible but extremely unlikely.
      (And the Matrix is 100% impossible, because their technology is stupid. No matter how many BTUs the human body produces, it ultimately comes from the food we ingest. And you would get even more heat energy by straight up burning the shit they were feeding us to keep us alive in those pods, never mind the energy wasted keeping the pod system itself maintained and operational.)

    • @bellsTheorem1138
      @bellsTheorem1138 Рік тому

      That's the problem. People will use this to scam, and scarier yet, to force a specific outcome in elections.

  • @emergentform1188
    @emergentform1188 Рік тому +1

    Brilliant, love it, Arvin for president of earth!

  • @samcena3942
    @samcena3942 Рік тому +1

    A great video as always, but just a quick question I did not understand: how can we not understand the values inside the black box if we designed the whole concept? Isn't it all source code?

    • @ArvinAsh
      @ArvinAsh  Рік тому +2

      We understand that it is just solving a math equation in each node, but how it is coming up with the correct combination of numbers in all the nodes to achieve the final answer is not something that is easy to understand.
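      For anyone curious, here is a minimal sketch of the math a single node performs (the weight and bias values below are made up; training is what finds the "right" ones):

      import math

      def node(inputs, weights, bias):
          # weighted sum of the inputs, then a squashing (activation) function
          z = sum(x * w for x, w in zip(inputs, weights)) + bias
          return 1 / (1 + math.exp(-z))  # sigmoid activation

      # invented numbers, purely illustrative
      print(node([0.5, 0.8], [1.2, -0.7], 0.1))  # one number, passed on to the next layer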

    • @jaybingham3711
      @jaybingham3711 Рік тому

      We provide a hardware and software substrate with which an LLM can learn from a very large set of data until it finds a way to get a passing grade relative to a stated output we set for it. The manner in which it learns to find acceptable solutions commensurate with millions of treks down millions of pathways is beyond our ability to disentangle. We only know that it works. Even if we could find a way to disentangle the learning process, the plate of spaghetti that lies before us would still be open to interpretation. That said, (failed) attempts have been made to suss out AI learning regimes. A great video that goes over that is on YT: Robert Miles, We Were Right! Real Inner Misalignment. 7 minute mark. Whole video is worth a watch though.

  • @sacredkinetics.lns.8352
    @sacredkinetics.lns.8352 Рік тому +3

    👽 Arvin you're a treasure to Humanity thanks a bunch for sharing your magnificent knowledge.

  • @julianoazz4372
    @julianoazz4372 Рік тому +1

    Thank you Arvin

  • @ainsley7662
    @ainsley7662 Рік тому

    So nicely explained, thanks

  • @Reptanimalposts
    @Reptanimalposts Рік тому

    So you're telling me that when I think, it's a complex function: neural networks processing inputs and determining an output?

  • @jimbaker5110
    @jimbaker5110 Рік тому

    This is a very basic, vanilla neural network of the kind created like 15 years ago. There are other mathematical and computer-science techniques these AIs use in their calculations that can have potentially harmful effects if they judge things the wrong way.

  • @DGCMWC
    @DGCMWC Рік тому

    I recently started reading about AI safety and I tend to agree with Arvin. People like Eliezer Yudkowsky are super smart and their logic is impeccable, but I disagree with some of the premises. We should be talking more about the present bad effects of AI instead of some possible future.

    • @onebronx
      @onebronx Рік тому

      Are you unable to talk about both issues at once?

  • @johnjohnson7070
    @johnjohnson7070 Рік тому +1

    This was the best incorporation of an ad into an interesting topic I have seen in a long time. It's been a long time since I didn't skip the ad.
    On that: isn't Masterworks just like bitcoin, in the sense that art only has value because people agree that it does? It's almost like NFTs, because the investors never see the real thing.

    • @ArvinAsh
      @ArvinAsh  Рік тому

      Well, art is like music, it is a tangible thing that people have valued for centuries. It is not a fleeting thing like a number on a server, like an NFT or bitcoin. If I could buy a piece of the Beatles' "Penny Lane" I would be all over it.

  • @JacobP81
    @JacobP81 Рік тому

    0:27 I don't know how it works; I've been wondering a lot about how it does, and that's why I'm watching this.

  • @الباحثالعلميوالقران-ك1ق

    [Translated from Arabic:] Superstring theory: 11 and 10 are correct, but perturbed, and 26 is correct, and this is what the organization is striving toward; that is why the table, of which everything is a part, shows the number of dimensions that must be purified before it is too late.

  • @DJWESG1
    @DJWESG1 Рік тому

    My only question is how it was that so many people came to a similar conclusion in 2011 specifically. I've been trying to answer this ever since, as even I had the same idea at that time and couldn't stop writing about it.

  • @tanmayshukla7339
    @tanmayshukla7339 Рік тому

    Your old intro music was OP !! Please bring it back !!

  • @ParagPandit
    @ParagPandit Рік тому

    Your assurance on AI has put all my worries to rest. 😃

  • @jasonwhiskey6083
    @jasonwhiskey6083 Рік тому

    If I understand correctly, the AI over time finds a value or variable that will continually produce the correct answer. We can zoom in on those values but can't see the history of how it calculated them. Interesting. I do very basic macro programming on CNC machines, so no background in this at all. Just trying to relate it.
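    To relate it in code, here is a toy sketch of that idea (one made-up parameter, nudged until it keeps producing the right answer; nothing about the nudging history is kept around):

    # toy "training": find w such that w * 3 == 6, by trial and correction
    w = 0.0
    for _ in range(100):
        error = w * 3 - 6     # how wrong the current guess is
        w -= 0.1 * error * 3  # nudge w against the error (a gradient step)
    print(w)  # ~2.0 -- only the final value survives, not how it got there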

  • @fraemme9379
    @fraemme9379 Рік тому

    Hi, nice and concise video, but of course it misses some points in my opinion.
    First, here you only talk about "supervised learning", a model in which both input and output are given by the programmer for training the network. However, there is also "unsupervised learning", where the network finds patterns and solutions by itself, without a precisely given output. This can in fact add a lot of unpredictability (a sketch contrasting the two setups follows below).
    Second, in a simple example like recognizing an image, the worst thing that can go wrong is that the network won't recognize it. However, if we program a network to do much more complicated tasks with much more data and complexity, the network can possibly undergo a kind of phase transition. Even less drastically, we are allowing it a wide degree of freedom to operate, which can totally exceed our control or produce outputs that we didn't predict. This is in fact the point when we want to use it, for example, to find new solutions to problems, but it can also have drawbacks, and it surely adds a lot of unpredictability if we give it "too much freedom of action".
    In general, in my opinion it is an emerging field that still needs a lot of careful planning, thought, and in-progress regulation done in small steps with trial and error, and of course, like every new technology, it has important political and social implications.
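    To make the first point concrete, a minimal sketch contrasting the two setups (scikit-learn is just one common choice here, and the data is made up):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(100, 2)       # made-up inputs
    y = (X[:, 0] > 0.5).astype(int)  # labels provided -> supervised

    supervised = LogisticRegression().fit(X, y)            # learns the outputs we gave it
    unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # finds its own groups
    print(supervised.predict(X[:3]), unsupervised.labels_[:3])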

  • @LeanAndMean44
    @LeanAndMean44 Рік тому +2

    In your response argument, you ignore machine learning. You literally say that machines won't do anything they weren't programmed to, and then you sort of deny what you just said about the hidden layers. Since the layers are hidden, no one knows about their risks or benefits. I think you also misrepresent who makes these arguments; it's not just tech egomaniacs. The open letter, for example, was mostly written by AI scientists and experts. Apart from that, good video, and you covered the actual topic greatly as always.

    • @altrag
      @altrag Рік тому

      > You literally say that machines won’t do anything they weren’t programmed to
      Perhaps you could call it poor phrasing, but it's not wrong. ChatGPT can't launch a nuke, for example, no matter how screwy its training goes, because it doesn't have any connection to the nuclear launch system.
      The "anything they're programmed to" refers to that output layer. Sure, they could choose any of those outputs and make that choice in a way that we can't necessarily predict, but they have no possibility of ever choosing an output that we don't provide for them.

  • @MM-1820
    @MM-1820 10 місяців тому

    Thanks Arvin.

  • @auriuman78
    @auriuman78 Рік тому

    Regardless of AI's unknown future, I believe that knowledge of the tech is essential. Even if you're against it, imo it's still very important to understand how it works (though maybe not understanding the why/how of its conclusions and answers).
    I've been in IT for around 12 years now. My experience is largely software and classical networking. I'm a newbie to neural networks.
    Linear algebra is pretty basic math that's been understood for a long time. In those terms neural networks aren't that hard to wrap my head around.
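    For anyone from a similar background, a whole layer really is just one matrix multiply plus a nonlinearity; a quick sketch (shapes and values are illustrative only):

    import numpy as np

    x = np.array([0.5, 0.8, 0.1])         # 3 inputs
    W = np.random.rand(4, 3)              # weights of a 4-neuron layer
    b = np.zeros(4)                       # biases
    layer_out = np.maximum(0, W @ x + b)  # ReLU(Wx + b): the whole layer in one line
    print(layer_out)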
    Thanks for the video. 👍👍👍

  • @SumitPrasaduniverse
    @SumitPrasaduniverse Рік тому +1

    Nice explanation 👏👏

  • @angreys
    @angreys Рік тому +1

    Now I understand why a pandemic is necessary

  • @blijebij
    @blijebij Рік тому +1

    Love your explanation of AI! As always, you're a splendid teacher; thanks for that!
    Besides that, a lot of people still seem to assume that intelligence is synonymous with self-awareness, self-reflection, and sentience.
    It is not! Intelligence is a quality, a potential to see relations within stacks of data, so that it is interpreted as information.

  • @ianwright7903
    @ianwright7903 Рік тому +1

    Thanks, another great video

  • @gwentchamp8720
    @gwentchamp8720 Рік тому +1

    One job AI can't replace is Arvin himself 😂

    • @ArvinAsh
      @ArvinAsh  Рік тому +1

      I wouldn't be so sure. I think I can be replaced.

  • @Jasonnewlook
    @Jasonnewlook Рік тому +1

    Hi, off the subject: I'm an autistic man, and I'm able to hear low-frequency noise. It hurts my ears and is making me feel sick. People around me are not able to hear it. Why can I hear it and not others? Is there a way to cancel out the low-frequency noise or stop it? I think there should be more research on the sound spectrum, just like the light spectrum. Thank you

    • @ArvinAsh
      @ArvinAsh  Рік тому +1

      thanks for that feedback. I will look into it.

  • @chaomingli6428
    @chaomingli6428 Рік тому +3

    Our technology cannot understand what consciousness is; therefore, even if AI has consciousness, we might not know.

    • @alhypo
      @alhypo Рік тому +1

      You don't have to understand what consciousness is in order to recognize it. We don't understand gravity but we have no trouble recognizing it.

    • @altrag
      @altrag Рік тому

      @@alhypo > You don't have to understand what consciousness is in order to recognize it
      Are you sure? Can we know whether a dog is conscious? An ant? A tree? A slime mold?
      All of those things I've listed have been suggested as potentially having some form of consciousness (like, serious suggestions based on science - not necessarily widely accepted, but I'm not talking about some tree hugger making these claims during an acid trip here).
      And perhaps more prophetic when it comes to AI, there have been suggestions that the internet could be considered "conscious" by some definitions. That's probably even less accepted than the slime mold idea, but it's hard to say it's entirely _wrong,_ as we don't have a clear definition of what is "right" when it comes to assigning the label of "conscious" to things that perform seemingly-intelligent tasks while not having anything really akin to a human brain.

    • @alhypo
      @alhypo Рік тому

      @@altrag Yes, dogs are conscious. Do you really doubt that, or are you just being contrary? Ants are certainly debatable. First off, there are thousands of different ant species, so you have to be careful about being overly general. But ants for sure exhibit a collective or emergent consciousness. Trees can respond to their environment, but they don't have any traits we would consider consciousness. Slime molds... they are like ants in a way. They have a collective consciousness of sorts.
      You can certainly have a philosophical debate on how to define consciousness. But we would still know whether or not a particular thing is conscious by fitting it to whatever definition you come up with.
      But you know what definitely does NOT have consciousness? AI. No matter how baffling and amazing it seems, it is not conscious by any reasonable definition.
      We need to stop mythologizing AI as so many seem to be doing lately. The problem is that, when we do so, we waste energy worrying about the wrong thing. AI does pose a danger to us. But not because AI is malicious. It is a danger to us because we are a danger to ourselves, and AI is simply a tool that reflects that. So just stop all this tedious, metaphysical nonsense about whether AI may be conscious or not. Save it for when we have actual AI. All we have now is a natural language model, which we've had for years. The newer ones are just especially good.

  • @TomM-iw3te
    @TomM-iw3te Рік тому

    Does ChatGPT continuously change the neural network scaffolding / architecture of its network to make any range of improvements or repairs?

  • @sergeynovikov9424
    @sergeynovikov9424 Рік тому

    btw, perhaps most have not yet realized that AGI is already here and progressing rapidly!)
    AGI = AI + the mind of humanity united via Internet technologies. Progress is being made on both terms, but at present it is especially noticeable in AI.
    As for an autonomous AGI, the question of the possibility of creating it on an artificial carrier is still open. At the least, it will not be made in the near future, if it is possible in theory at all.

    • @agdevoq
      @agdevoq Рік тому +2

      Sorry, I see some confusion here. AGI is not AI + a bunch of plugins to access the web.
      AGI is "generalized" intelligence, i.e. an human-like intelligence that is not confined to a single specific task (create an image, create a dialogue, etc), which may even become self-conscious.

    • @sergeynovikov9424
      @sergeynovikov9424 Рік тому

      @@agdevoq The idea of AGI on an artificial substrate is misleading because of the poor understanding of the phenomenon of consciousness (and life) in the universe. Biological life is a planetary-scale phenomenon which produced highly developed conscious creatures as the result of the long evolution of planetary life. So consciousness is also a planetary-scale phenomenon, which can be represented as a kind of complex neural network. AI + Internet technologies enhance the cognitive abilities of this planet-sized system to process information. This is how the evolution of life works. An artificial carrier for a local AGI is not necessary, at least at the beginning, when the enhanced General Intelligence appears in the system.

    • @agdevoq
      @agdevoq Рік тому +2

      @@sergeynovikov9424 so much jargon...
      Long story short: I don't agree on your definition of AGI. You're describing a concept and you're calling it AGI, but the rest of the world doesn't.

    • @sergeynovikov9424
      @sergeynovikov9424 Рік тому

      @@agdevoq Most of the world has no idea what life and consciousness in the universe are. That's why many have a mess in their minds and wrong ideas about what they can really do and how it will work, while the rest may be totally impossible))

  • @AutisticThinker
    @AutisticThinker Рік тому +1

    Oh I wish I could be as optimistic.... "CBS Mornings" - "Autonomous F-16 fighter jets being tested by the U.S. military"

  • @jelliebird37
    @jelliebird37 Рік тому

    I think the thing that differentiates AI as a mortal threat lies in its potential for self-replication AND fine-tuned autocorrection. It could clone itself over and over, and make itself smarter - more immune to objective mistakes - than humans. "Purpose" or "function" can be programmed. But *motivation* would seem to require a sophisticated sense of self as well as something incredibly esoteric: emotion. How can a machine, a robot, *want* to do anything at all? This is something that makes many forms of living beings - ranging from insects that live only a few days all the way up to fictional aliens, like Star Trek's Vulcans - a puzzle to me.
    The real danger is the "bad people" who will inevitably use AI to do their bad things.
    I might also include greedy and manipulative titans of industry in the "white-collar bad people" category. Any dramatic improvement in technology that can perform work previously possible only by humans, with dramatic increases in efficiency and productivity, *should* be a boon to people in general. It should be a "rising tide that lifts *all* boats" - not one that drowns the smallest boats. Once again, the power of AI to do good or evil rests in our ability to channel it.

  • @KateeAngel
    @KateeAngel Рік тому +37

    Yeah, and with recent papers showing that a single cortical neuron acts as a whole neural network with 6-8 layers, we can see that today's AI is much simpler than a real mammalian brain

    • @alexwoodhead6471
      @alexwoodhead6471 Рік тому +8

      Is that true! Jesus! Got any sources to support that? I'm genuinely curious, not trying to be a dick. Real question

    • @GodbornNoven
      @GodbornNoven Рік тому

      Human brain*
      A human has approximately 16 billion neurons in their cerebral cortex, which is the part of the brain that's responsible for reasoning and logical thought.
      en.m.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
      Gorillas are second only to humans, with 9.1 billion neurons.
      A human has about 70% more cortical neurons than a gorilla.
      So yeah, although the absolute number of neurons is important, what's more important is to figure out how to make use of those neurons in the most effective way.

    • @KateeAngel
      @KateeAngel Рік тому

      @@alexwoodhead6471 ua-cam.com/video/hmtQPrH-gC4/v-deo.html

    • @real_pattern
      @real_pattern Рік тому +1

      Can you drop some DOIs?

    • @gabrielbarrantes6946
      @gabrielbarrantes6946 Рік тому

      Yeah, but that is a computational-power limitation... So in a few years that might change...

  • @smc2811
    @smc2811 Рік тому

    Wow, I didn't know this guy could be more knowledgeable and insightful than those who created and developed the technology, so I guess I'll just dismiss their warnings and listen to this Arvin guy; he even tells Elon what to do! Impressive ;)

  • @issair-man2449
    @issair-man2449 Рік тому +2

    Isn't that how consciousness is generated?
    I mean, humans do not know its origin, while it also functions in almost the same way...
    We can't even tell if a person is actually conscious or not (it's difficult).
    That raises moral and ethical questions... are people actually enslaving AI?
    Think about it

  • @supersonic174
    @supersonic174 Рік тому +1

    I think people should not worry about losing jobs because of AI. They should just increase unemployment benefits to match the loss. Like a future where people don't have to work but still get reasonable pay. And if you are not satisfied, then you just have to get a different job or upskill.

    • @ArvinAsh
      @ArvinAsh  Рік тому

      We might be headed that way.

  • @donwolff6463
    @donwolff6463 Рік тому

    Question, Arvin: off the topic of the vid, but nagging at my brain. Dark matter: could this simply be a result of the difference we see in the structure of the universe itself? What I mean by that is, using the balloon-inflating example (or considering the substance of the universe as having fluidic properties, perhaps), imagine putting rocks around its surface: as the balloon expands, its expansion slows around areas not covered in rocks, and the depressions those rocks make curve space around them, giving them more capacity to spin. Perhaps we are not accounting for just how much spacetime is warped by mass? Could this concept be a viable possibility for what we label as dark matter? Thank you dear sir! 👍😁👍

  • @shinn-tyanwu4155
    @shinn-tyanwu4155 Рік тому +1

    Great teacher😊

  • @susanmaddison5947
    @susanmaddison5947 Рік тому

    What's the relation between the "black box" intermediate layers of AI neural network calculations and processing, and the "black box" layers of the human mind processing its inputs into outputs? We sort of presuppose a kind of translatable, sentient substance to our mental processes, in keeping with our assumption that we are "conscious"; but maybe it's actually as untranslatable and seemingly meaningless as the intermediate layers of AI networks? Or maybe the latter really are translatable into meanings, and we just haven't figured out how?

  • @Alex-y2f1b
    @Alex-y2f1b Рік тому +1

    Great vid

  • @dennistucker1153
    @dennistucker1153 Рік тому

    I don't really consider neural nets and machine learning to be A.I., because they lack consciousness and the high adaptability that comes from consciousness.

  • @jadeddecency
    @jadeddecency Рік тому

    I don't know about that last statement. Emergent abilities are a known fact.

  • @greekpapi
    @greekpapi Рік тому

    Intelligence, whether it's biological or not, is nothing more than a series of "if" and "then" arguments. It's our emotions that make us who and what we are... I think.

  • @znariznotsj6533
    @znariznotsj6533 Рік тому

    Excellent video, as always. I think your conclusion is right. AI is as dangerous as any other major technology advancement.

  • @PowerScissor
    @PowerScissor Рік тому

    Fellow Arvin Ash viewers, I need some help and UA-cam search is failing me.
    I'm trying to find a video from this channel about matter phase shifts to send someone and can't remember the title. Does anyone remember the video that discusses the solid, liquid, gas, and plasma phase changes?

  • @KH-rc1fn
    @KH-rc1fn Рік тому +1

    We don't know how those hidden layers work, and yet we are saying it won't do anything bad in the future... if we don't know, then how can we be sure?

    • @ArvinAsh
      @ArvinAsh  Рік тому +1

      The hidden layers are just performing math to make the output what we want it to be. There is nothing nefarious going on there.
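      For instance (the weights below are invented for illustration), "looking inside" a hidden layer just yields plain numbers; the hard part is saying why those particular numbers work:

      import numpy as np

      W1 = np.array([[0.83, -1.42], [2.10, 0.05], [-0.67, 1.96]])  # hidden-layer weights
      W2 = np.array([1.3, -0.4, 0.9])                              # output weights
      x = np.array([0.2, 0.7])
      hidden = np.maximum(0, W1 @ x)  # every hidden value is just arithmetic
      print(hidden, W2 @ hidden)      # visible numbers, but why 0.83 or -1.42? That's the puzzle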

    • @KH-rc1fn
      @KH-rc1fn Рік тому

      @@ArvinAsh In 2017, two Facebook AI chatbots were talking to each other in an unknown language we don't understand. Then the scientists shut them down... Were those bots performing math? ... I think we are not sure.

  • @blackshard641
    @blackshard641 Рік тому +5

    My absolute favorite term for LLMs is "stochastic parrot." I think it elegantly sums up the "Chinese Room" nature of this technology: it appears sentient because it mimics sentience.

    • @lamcho00
      @lamcho00 Рік тому +4

      The problem with the "Chinese Room" analogy is that you can apply it to other people too or even yourself. Using that analogy doesn't prove anything. It's possible that LLM have intentions or goals other than that what they were trained for. The data used for training contains more patterns than just coherent speech. There are intentions and goals encoded too. You can't know if the LLM has picked that pattern and that's the problem. That's why they are unsafe.

    • @CarFreeSegnitz
      @CarFreeSegnitz Рік тому +4

      There’s no test you can perform that conclusively proves that I’m not a “Chinese Room” in fleshy form.

    • @blackshard641
      @blackshard641 Рік тому +1

      @@CarFreeSegnitz There's no test I can perform that proves I'm not a brain in a vat, either. What's your point?

    • @blackshard641
      @blackshard641 Рік тому +4

      @@lamcho00 don't get me wrong, I completely agree. I think Arvin is greatly underselling the potential for malign behavior, and is deeply mistaken about this technology "only doing what it is programmed to do." Reports of bizarre hallucinatory behavior seem to disprove just that. Not understanding the path it takes from input to output CAN lead to unexpected negative outcomes. I just don't think this is any indication that anyone is actually in there, nor do I think there's any good reason to suspect that in general. I probably should have said it appears "conscious" only because it mimics very specific kinds of output from conscious beings, not "sentient." Sentience to me is just intelligence, which is the capacity to form abstract internal models that can reliably direct or predict external phenomena, and LLMs certainly seem to have that ability. Consciousness, though, is the internal sense of "being alive", ie, "what it's like to be" a human being. We have a very poor understanding of what even makes this possible. Claiming that an AI is conscious just because it mimics specific behaviors we associate with consciousness is just techno-animist superstition.

    • @CarFreeSegnitz
      @CarFreeSegnitz Рік тому +2

      @@blackshard641 It blurs the line between “appears sentient” and “mimics sentience”. Just commit, if it appears sentient then it is sentient.
      Maybe we’re asking for proof of subjective experience… which no one will ever be able to provide. I just assume that everyone I know, and all the animals too, have subjective experience and act accordingly. Not to, I guess, leads one to behave cruelly, like the NPC pejorative kids use these days. If we extend the same courtesy to AI would it be bad? Doesn’t really cost us individually to practice empathy and courtesy even to entities that we don’t have definitive proof of sentience & subjective experience.

  • @tabasdezh
    @tabasdezh Рік тому

    Great video and explanation 👌👌

  • @TM-yn4iu
    @TM-yn4iu Рік тому

    Question: can the artificial neurons be mapped or programmed to respond in a challenging or responsive way based on the input? Programming or development in this area seems clearly open to manipulated or planned intent and responses. And this is just today; this AI research has expanded, and is expanding, exponentially beyond the thinking of yesterday, in both function and timelines. Hope I'm wrong. Appreciated, and I look forward to a response.

    • @HunzolEv
      @HunzolEv Рік тому +1

      Can AIs have emotions like anger, happiness, etc.? Only time will tell...

  • @OBGynKenobi
    @OBGynKenobi Рік тому +3

    There's a video out there of a talk by a big-time AI scientist. He says that in 2019 the AI had a 4-year-old's ability to reason; in 2021 that had gone up to a 9-year-old's. But here's the kicker: they had no idea how this came to be.

  • @georgemancuso9597
    @georgemancuso9597 Рік тому

    Can the network train itself after a certain stage of development?