How the BRAIN of an AI Works: Shockingly Simple but Genius!

  • Published 22 May 2024
  • Skip the waitlist and invest in blue-chip art for the very first time by signing up for Masterworks: www.masterworks.art/arvinash
    Purchase shares in great masterpieces from artists like Pablo Picasso, Banksy, Andy Warhol, and more.
    How Masterworks works:
    -Create your account with your traditional bank account
    -Pick major works of art to invest in or our new blue-chip diversified art portfolio
    -Identify investment amount
    -Hold shares in works by Picasso or trade them in our secondary marketplace
    See important Masterworks disclosures: www.masterworks.com/about/dis...
    WANT ALL YOUR QUESTIONS ANSWERED, guaranteed, and to provide input on video subjects?
    Join Arvin's Patreon: / arvinash
    REFERENCES
    (Prior video) How ChatGPT works: • So How Does ChatGPT re...
    Sigmoid functions: tinyurl.com/2pqeg7ag
    How to build a Neural Network: tinyurl.com/yfxscyum
    Simple guide to Neural Networks: tinyurl.com/2gn6wvmc
    CHAPTERS
    0:00 What this video is about
    1:12 What is a neural network?
    3:42 How do neural networks work?
    6:17 How nonlinearity is built into neural networks
    9:00 Masterworks offer: www.masterworks.art/arvinash
    10:47 How Artificial intelligence can be "scary"
    13:45 What is the real threat of AI?
    SUMMARY
    In this video, I explain how AI really works in detail. An artificial neural network, also called a neural network, is at its core a mathematical equation, no more. It's just math. The term neural network comes from its analogy to neurons in our body: neurons in neural networks also serve to receive and transmit signals, just like biological neurons. As in the brain, we connect multiple neurons together to form a neural network, which we can train to perform a task.
    A neuron in a neural network is a processor, which is essentially a function with some parameters. This function takes in inputs, and after processing the inputs, it creates an output, which can be passed along to another neuron. Like neurons in the brain, artificial neurons can also be connected to each other via synapses. While an individual neuron can be simple and might not do anything impressive, it’s the networking that makes them so powerful. And that network is the core of artificial intelligence systems.
    How do these artificial neurons work? The essence of an artificial neuron is nothing but a simple equation from elementary school, z(x) = w*x + b, where x is the input, w is a weight, b is a bias term, and the result or output is z(x). This allows the AI system to map the input value x to some preferred output value z(x).
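    To make that concrete, here is a minimal sketch of a single artificial neuron in Python (the weight and bias values are made up purely for illustration):

        # One artificial neuron: multiply the input by a weight, add a bias
        def neuron(x, w=0.8, b=0.5):
            return w * x + b

        print(neuron(2.0))  # 0.8 * 2.0 + 0.5 = 2.1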
    How are w and b determined? This is where training comes in. We have to train the parameters w and b into the AI system, such that the input can be mapped to the most appropriate or correct output. How is the training done? I walk through a simple example in the video to illustrate how this works. The input is controlled and the correct output is known. If the actual output is not what it should be, then w and b are adjusted until the output does match. After many iterations, the network "learns" by adjusting w and b in the various nodes of the network.
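    The video does not name the method, but the standard way to make these adjustments is gradient descent. A hedged sketch in Python, for one neuron and one known input/output pair (all numbers invented for illustration; real networks apply the same idea across many neurons and many examples):

        # Goal: train w and b so that input x = 3.0 produces output y = 7.0
        x, y = 3.0, 7.0
        w, b = 0.0, 0.0  # start from arbitrary values
        lr = 0.01        # learning rate: the size of each adjustment

        for _ in range(1000):
            z = w * x + b        # current output of the neuron
            error = z - y        # how far off we are
            w -= lr * error * x  # nudge w to reduce the error
            b -= lr * error      # nudge b to reduce the error

        print(round(w * x + b, 3))  # now very close to 7.0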
    Note that the equation above is linear, which is limiting. Nonlinearity is introduced into the network by adding a mathematical trick called an activation function. An example of such a function is the sigmoid function. I show an example of this in the video. With an appropriate activation function, the AI can answer much more complex questions.
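    For reference, the sigmoid is sigmoid(z) = 1 / (1 + e^(-z)), which squashes any input into the range 0 to 1. A minimal sketch of a sigmoid-activated neuron (illustrative only):

        import math

        def sigmoid(z):
            # Maps any real number into the range (0, 1)
            return 1.0 / (1.0 + math.exp(-z))

        def neuron(x, w, b):
            # Nonlinear activation applied to the linear part
            return sigmoid(w * x + b)

        print(sigmoid(0.0))   # 0.5
        print(sigmoid(5.0))   # about 0.993
        print(sigmoid(-5.0))  # about 0.007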
    #artificialintelligence
    #ai
    #neuralnetworks
    There is one thing about this neural network that some find scary. When a network is trained, the adjustments that the system makes to w and b during training are a black box. That is, when we train the system using known inputs and known outputs, we have the system self-adjust the internal results of its various nodes until they match what the known result should be. But how exactly the network adjusts the various layers of intermediate outputs to achieve the final output we want is NOT really known. The input and output layers are known, but the stuff inside is not. That is why these intermediate layers of neurons are called "hidden" layers. The hidden layers are a black box.
    We don't really know what these various layers are doing. They are performing some transformation of the data which we don't understand. We can inspect the calculated intermediate results, but they look meaningless.
    No AI technology based on neural networks today could become something like Skynet in the Terminator movies, that suddenly becomes conscious and threatens mankind. The real threat of AI is in its power to do things that humans do today, and thus potentially eliminate jobs.
  • Science & Technology

COMMENTS • 601

  • @ArvinAsh
    @ArvinAsh 1 year ago +13

    Here's the link to my prior video on how AI bots like ChatGPT work: ua-cam.com/video/WAiqNav2cRE/v-deo.html - Good background to have before or after watching the video above.

    • @LimbDee
      @LimbDee 1 year ago

      Thanx, I paused at 0:48, I'll check this one first.

    • @polaris1985
      @polaris1985 1 year ago

      Please make a video on how quantum computers calculate with qubits; it's difficult to understand.

    • @dongshengdi773
      @dongshengdi773 11 months ago

      ​@random user why not tell the AI to create more jobs ?

    • @uiteoi
      @uiteoi 11 months ago

      Great video once again. Would you consider making a video on the attention mechanism at the heart of transformers, from the 2017 paper "Attention Is All You Need"?

    • @amdenis
      @amdenis 10 months ago +2

      I have worked in AI for decades, from expert systems and machine learning to much more advanced deep learning systems over the past decade. I appreciate that you are trying to allay fears by educating people, and that is a great thing. However, we also want to make sure people are properly informed and aware of what is actually happening. To that end (and I mean this respectfully), you are wrong about how AI works on a fundamental level.
      First, unlike traditional programming, where in a generalized sense data structures plus algorithms equals programming, in AI most of the actual functionality or "algorithms", in terms of imbued knowledge and capabilities, are derived as output, not written as programming input. The core inference "algorithms" are effectively the patterns formed by the collective data exposure, which shape the weights, biases, activation function and the foundational ANN model. In fact, that is just the beginning of how AI differs from traditional programming and why you cannot say "AI will not do anything we do not program it to do."
      Second, an increasing percentage of AI is based on unsupervised and semi-supervised learning, where not only is AI mostly "programmed" by exposing it to data so it can do pattern recognition and discrimination, but also many of the results of what it learns and can do are substantially unknown, thereby often producing novel knowledge bases, solutions or "programming". Further, at some point a quantitative change enables a qualitatively different result, thanks in no small part to NVidia's H100 GPUs, which enable near-perfect horizontal and vertical scaling across both memory and processing power for the first time (A100s broke down quickly in terms of scaling, such that problems requiring context to be established across trillions of parameters had to be turned into small sub-tasks with simplified objectives). That is why a company like Inflection AI can raise 1.3 billion, mostly for H100-based servers, at a $4 billion valuation despite being a roughly year-old startup, and also why GPT-4, which leveraged H100s for the first time, is so far beyond GPT-3.
      Third, emergent behavior is a very real and increasingly frequent outcome in the private sector. I did DOE/DOD and related dev for years, which was typically the better part of a decade ahead of the private sector. We saw ground-shaking examples even 5 years ago in those circles, and we are starting to see amazing, and sometimes scary, new capabilities emerge completely outside of any purpose, constraints or substantial basis of any kind in the data the AI was trained on.
      Obviously, if you do not work at the leading edge of AI dev like we do, you may not be seeing it from the inside, but you can still learn about it and why it is just one of several things behind why the people closest to developing our future, across small and large companies, are sending up warning signals, explaining the dozens of ways it can and will go seriously wrong for our species if we can't find ways to address it, and even asking to be regulated and overseen by the government. We have OpenAI senior people on the board of one of my companies, and I can say without any reservation that what is being seen via the 25,000 H100s now being used to train GPT-5, as well as what others are seeing at Tesla, Google and elsewhere, indicates that super-intelligent AGI is going to be a spectrum of capabilities (i.e. not just one thing happening at a single point in time), which will become very evident and real beginning within roughly 18 months. Finally, the meta, emergent and other unanticipated capabilities, including lateral thinking, creative problem solving, and super-human levels of inference beyond what humans can derive from the same data, ARE ALL REAL AND HAPPENING CURRENTLY. All of these are happening now via even current-day LLMs, and are all on several levels far beyond what they were expected, let alone "programmed", to do.

  • @michaelhouston1279
    @michaelhouston1279 11 months ago +77

    I recall reading about an AI program that was built to recognize wolves from a picture. They trained it with a bunch of pictures, but when they then showed it a picture of a wolf, and asked it if this was a wolf, it failed. They also showed it pictures of dogs and sometimes it would fail by saying it was a wolf. They decided to add code to determine what the AI was using to "learn" what a wolf was. They discovered that all the pictures of wolves that they used to train the AI had snow in the background and the snow is what the AI picked up on. I think we need to be very careful introducing AI into society to make sure it's not flawed in the hidden, black-box part.

    • @michaelblacktree
      @michaelblacktree 11 months ago +5

      Now that's funny. You would expect the trainers to "scrub" the photos of extraneous data, but apparently they didn't think of that.

    • @jelliebird37
      @jelliebird37 10 months ago +1

      @@aarqa 😂 I'm with ya. Whenever I'm registering with some website and I get one of those "prove that you're not a bot" verification panels - you know, "Identify all the pictures of boats" - I anticipate getting it wrong the first time 😄

    • @whatisahandle221
      @whatisahandle221 10 months ago +2

      Yep: training techniques are as important, if not more important, than the "AI code" itself.
      Human brains are all very similar*, but there are human scientific geniuses, saints, artists, dedicated parents, Gold Award and Eagle Scouts, etc., as well as people who struggle with mental health problems, drug addictions, criminal behavior, greed, laziness, and the whole range of human struggles, faults, and worse.
      *Check out the book The Dyslexic Advantage: Unlocking the Hidden Potential of the Dyslexic Brain by Brock L. Eide M.D., M.A. and Fernette F. Eide M.D. It has an early chapter on the latest research theories that try to explain the differences between dyslexic brains and typical brains. Overall, their viewpoint is that dyslexic brains tend to have some (varying) low-level structural differences, giving people with dyslexia disadvantages in some tasks (e.g. often reading) but also one or more of four categories of advantages, which have led to higher percentages of dyslexic individuals than in the regular population among engineers, mechanics, mathematicians, interior designers, illustrators, architects, software designers, scientists, inventors, poets, songwriters, journalists, counselors, entrepreneurs, small business owners, jobs in medicine, etc.

    • @whatisahandle221
      @whatisahandle221 10 months ago

      As a judge at a recent regional middle school science fair, more than a few of the projects in my category involved learning algorithms and image recognition (not really full AI). One student was sincerely interested in school safety and so wanted to train an algorithm to recognize a gun. This student and others used a popular image database for training (I forget the name). When the first attempt produced so-so results, the student switched to a broader database that included lots of obvious, stylized Hollywood and entertainment-media pictures of guns: i.e., guns facing the camera head-on. When questioned about whether the choice of training images was realistic for an application like a CCTV monitoring system, the student unfortunately didn't even register the disconnect. (I left written feedback, but I'm not confident that I have the learning-algorithm vocabulary to impress upon the student the nature of the deficiencies in their requirements definition and algorithm training, especially given their very passionate drive about the topic of school gun safety.)

    • @othfrk1
      @othfrk1 8 months ago +1

      Data is what powers AI. You can write a neural network in a few lines of code, but it's the data you use to train it that makes the magic happen...
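
      For illustration, a complete (untrained) two-layer network really does fit in a few lines of Python. This is only a sketch, assuming NumPy; the layer sizes are arbitrary:

          import numpy as np

          rng = np.random.default_rng(0)
          W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)  # layer 1: 3 inputs -> 4 hidden
          W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)  # layer 2: 4 hidden -> 1 output

          def forward(x):
              h = np.tanh(W1 @ x + b1)  # hidden layer with a nonlinearity
              return W2 @ h + b2        # output layer

          print(forward(np.array([1.0, 2.0, 3.0])))  # untrained, so the output is arbitrary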

  • @davidmurphy563
    @davidmurphy563 1 year ago +32

    As someone who codes deep neural networks, I'd warn the layman viewer who watched this and thinks it clicked in their mind: *this video did not include an explanation of how DNNs work.*
    I know this is squarely aimed at the layman and so should be simple, but this really is not a good explanation, I'm afraid to say... The individual facts are correct, but he totally missed out _why it works._ The neurons and layers are beside the point. It's actually something called a matrix-vector transform; it's a geometric solution. The same one your graphics card uses to project a 3D computer game onto your screen (see the sketch at the end of this comment). Think of it like taking a flat Mercator world map and transforming it into a globe. You take a geometric space of all possible inputs and transform it into a vector of outputs by twisting space.
    Then think of a landscape where the valleys are bad solutions and hills are good ones (or vice versa), and deciding which way to go by feeling the slope beneath your feet. There's an excellent video called "The Beauty of Linear Regression (How to Fit a Line to your Data)" by Richard Behiel. He's a physicist and doesn't mention DNNs, the video isn't about them, but it's a far better explanation than this one, in that it is an explanation.
    Finally, the explanation of the risks of AI was really, really bad. If you're interested in the topic, there's a channel by Robert Miles, an expert on the topic, which explains it clearly. What you heard here was about as useful as your average opinion in a bar.
    Hats off to this guy for doing some research for this video, but sadly it's clear he's not really understood the topic.
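
    To make the matrix-vector point concrete, a minimal sketch (assuming NumPy; the sizes and random values are arbitrary): one layer is literally a matrix-vector transform followed by a nonlinear bend.

        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.standard_normal((4, 3))  # weight matrix: maps 3-dim inputs to 4-dim outputs
        b = rng.standard_normal(4)       # bias vector
        x = np.array([1.0, -2.0, 0.5])   # input vector

        h = np.tanh(W @ x + b)  # transform the space, then bend it
        print(h.shape)          # (4,)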

    • @ericwaweru4043
      @ericwaweru4043 1 year ago +7

      Yeah, highly recommend Robert Miles' videos on AI safety and alignment problems, on his channel and on Computerphile.

    • @agdevoq
      @agdevoq 1 year ago +4

      C'mon, it's a youtube video, not a university class, and it's aimed at non-specialists. Somewhere, you need to draw the line of "good enough". As a programmer with 20+ years of experience and some basic understanding of neural networks, I find this video way better than my old university class back in the day.

    • @altrag
      @altrag 11 months ago +1

      Robert has a habit (hopefully just for the clicks) of going way too far into the paranoia column. Like yeah, alignment problems are an issue, but it's not like we turn on an AI and walk away, hoping for the best. We monitor them, and if they're out of "alignment" we tune them.
      The easiest way to prevent an AI from launching a nuke is to not give the AI uninhibited access to the launch controls. It's that easy.
      Perhaps if we ever get to a point where AIs are fully autonomous, with full control over articulated limbs and full capabilities of self-locomotion, _and_ we allow them to evolve themselves beyond their design capabilities (e.g. to disable their own fail-safes, a function that would require real-time learning capability, not just running data through pre-trained networks as we typically do today), then we might need to start being a bit more concerned.
      But we're a very, very, very long way away from that. Your Roomba is not going to suddenly figure out how to grab a knife from the drawer and slash your throat, no matter how good it gets at cleaning your floor.
      There are much more immediate problems we should be concerned with - problems that AI can help with, and even has been helping with. Climate change in particular. We're not going to have to worry about AIs killing us 100 years from now if we've already done the job ourselves in the next 50.

    • @onebronx
      @onebronx 11 months ago

      ​@altrag the "easiest way" you mention is the hardest one. Because, you know, it is people who decide to give or not to give the control, and there are strong incentives for armies to use AI in a battlefield. Yes, we managed to not destroy ourselves by nukes, but nuke launch systems are still dumb.
      "Past performance does not guarantee future revenues"

    • @altrag
      @altrag 11 months ago

      @@onebronx > it is people who decide to give or not to give the control
      It's also people who like to be in control. There is no scenario where anyone with the authority to launch nukes is going to intentionally hand that authority over to an AI. That's just not how humans handle power dynamics.
      So that leaves an AI accidentally being given authority to launch nukes. This is the "easy" part - if it has no way to access the nukes, it can't launch them even if it has theoretically been given the authority.
      It's the same way we avoid hackers gaining access to launch nukes - we simply don't put them on the internet. Problem solved.
      > there are strong incentives for armies to use AI in a battlefield
      No there isn't, not really. There's a strong incentive for armies to keep soldiers off the battlefield. AI is one potential way that can be accomplished, to be sure, but that's a very different mode of thought, leading to very different design goals for any AI that might ever be fielded.
      Plus, nukes aren't on the battlefield. They're in a silo in another country, or on a submarine a thousand feet beneath the ocean, far away from the battlefield and far away from any area the enemy could potentially get to and seize.
      > "Past performance does not guarantee future revenues"
      Obviously nothing is ever 100% absolutely certain, but we have a hell of a lot more problems to worry about than a real-life Terminator story. The risk factor is just so incredibly tiny that it's not really worth considering. So, so many things would need to go wrong, and most of them among people who have earned the highest levels of trust their nation can award.

  • @MartijnMuller
    @MartijnMuller 11 months ago +35

    I've been trying to inform myself about AI for a couple of months now, and I never really understood why or how people said "we don't understand how it works". Your video is the first that made me understand the black box. Great job my friend!

    • @TimWalton0
      @TimWalton0 11 months ago +2

      Also I think there's a big difference between "we don't know how it works" and "we don't know why it made that decision".

    • @auriuman78
      @auriuman78 10 months ago

      ​, huge difference thanks for pointing it out.

    • @theweirdgiraffe4323
      @theweirdgiraffe4323 9 months ago

      AI designer Connor Leahy explains that how AI works is still a complete mystery:
      "These AI systems are not computer programs with code, this is not how they work. There is code involved sure, but the thing that happens between you entering a text and you getting an output, is not human code, there isn't a person at OpenAI sitting in a chair, who knows why it gave you that answer, and go through the lines of code and see "Ahh here's the bug" and then fix it. No no no, nothing of the sort. AI systems are more, not really written, they're grown, they're organic things that are grown in a petri dish, like a digital petri dish, there's a subtlety to this. But the resulting system is not a clean human readable text file, that shows all the code. Instead you get billions and billions of numbers, and you multiply these numbers in a certain order and that's the output, and what these numbers mean, how they work, what they are calculating, and why, is mostly a complete mystery to science to this day. I don't think this is an unsolvable problem, to be clear, it's not like this is unknowable. It's just hard. Science takes time. Figuring out complex new scientific phenomena like this takes time, and resources and smart people, but currently it's a mystery. We have no idea what the mystery sauce is, that makes these systems actually work. And we have no way to predict them, and we have no way to actually control them. We can bump them in one direction or bump them another direction, but we don't know what else we're impacting. We don't know if the AI learned what we wanted it to learn. We don't know what we actually sent to the system, because we don't speak their language. We don't know what these numbers mean. We can't edit them like we can edit code. What this leaves us with, is this black box, where we put some stuff in, some weird magic happens, and then something comes out.
      Let's say you're OpenAI and your GPT-4 model was given an input and it gives you an output you don't like. What do you do? Well, you don't understand what happens inside the AI; it's all just a bunch of numbers being crunched. The only thing you can do is nudge it, sort of, in some direction: give it a thumbs up or thumbs down and then you update these trillions of numbers. Who knows how many numbers there are inside of these systems; push all of them or some of them in some direction, and maybe it gets you a better output, maybe it doesn't. I want to drive home how ridiculous it is to expect this to work."
      -but somehow it works.

    • @othullo
      @othullo 5 months ago

      @@theweirdgiraffe4323 It works because, with enough parameters, it basically captured the underlying patterns in human language and reasoning. Everything that's not completely chaotic has a pattern; useful information has a pattern. The pattern can be too complex to describe using traditional programming methods, but these parameters adapted to adhere to those patterns. And that is probably how the brain's neurons work as well; just as we don't know exactly how a human kid learns a language, other than by listening to a lot of the parents' talk and adapting to the patterns in the parents' speech. The AI probably does the same thing. That's my understanding anyway; I'm not an expert.

  • @BlackbodyEconomics
    @BlackbodyEconomics 1 year ago +20

    I've got a "well, actually ..." here for ya.
    AI/ML engineer here - many of these larger networks actually DO do things they have not been trained to do. They often surprise their own developers with capabilities they were never trained to perform.

    • @shawnscientifica7784
      @shawnscientifica7784 1 year ago +7

      Same; I also work on AI. I'm going to make videos to educate people, because most are insanely incorrect. No one knows HOW an AI works once it's been trained and starts generating its own responses. We know the layers and the algorithms used to convolute those values in each layer. But saying we know AI because we know that is like saying that if you know human anatomy, you now know how every human acts and thinks. There are emergent phenomena that wipe all that off the whiteboard.

  • @lamcho00
    @lamcho00 1 year ago +94

    The problem is, you train a neural network with a particular goal in mind, but it ends up doing more. It finds patterns in the data you were not able to foresee. When ChatGPT was trained, nobody thought it would be able to do math, even if it's just simple arithmetic with small numbers. Nobody knew it would be able to handle concepts or make generalizations.
    It would be more useful to think of neural networks as function finders. They substitute for the function you are not able to explicitly define and write conventionally. The bad thing about training a neural network on vast amounts of information is that it ends up picking up the intentions behind the words. In a way, it finds the function of emotional outbursts or bad intentions. As long as the information was generated by humans with such flaws, the neural network is bound to pick those flaws up.
    In the case of ChatGPT and Bing Chat, they had to train another neural network to block those types of responses. So in a way these unforeseen consequences are already happening. I think the issue here is that such big neural networks require lots of data, and it's not humanly viable to check all that data and sanitize it. Just search for *"Bing Chat Behaving Badly"* and you'll see what I'm talking about.

    • @SchgurmTewehr
      @SchgurmTewehr 1 year ago +4

      Thanks for clearing this up.

    • @CuanZ
      @CuanZ 1 year ago +12

      They had no idea ChatGPT would be able to do chemistry; it's just one more example of the unpredictable emergent skills LLMs come across.

    • @wingflanagan
      @wingflanagan 1 year ago +19

      Exactly. All due respect to the great Mr. Ash, emergence is a real phenomenon. Physics is not my area, but computer science is. If you accept that the human brain is a meat-based computation engine, then silicon-based machines are definitely capable of all the same traits. I personally subscribe to the "strange loop" theory of consciousness, which means that all a self-training neural network needs to truly wake up and start thinking independently is an unfettered feedback loop in conjunction with sufficient complexity. IMHO that is inevitable. There is no stopping it. The notion that AIs can only do what we program them to do is accurate, but here's the rub: past a certain point, we are _not_ doing the programming. Of course, I could always be wrong. But I don't think so.

    • @mlonguin
      @mlonguin 1 year ago +6

      I think consciousness is just a defense mechanism that evolved in animals with complex brains, and there is no reason for it to emerge in AI, as the mechanisms by which AI evolves are not the same.

    • @bungalowjuice7225
      @bungalowjuice7225 1 year ago +5

      ​@@mlonguin lol, well legs are also evolved... yet we can create legged robots. Evolved doesn't mean it can't be reproduced.

  • @patrickmchargue7122
    @patrickmchargue7122 1 year ago +19

    You should also add a discussion on recurrent networks. Maybe neuromorphic ones too. The feed-forward networks are the most common, but these others are pretty interesting.

    • @simssim262
      @simssim262 11 months ago +2

      convolutional nets too

  • @aiart3615
    @aiart3615 1 year ago +2

    Thank you Arvin for this topic.

  • @troylatterell
    @troylatterell 11 months ago +4

    Love all your videos Arvin, absolutely great! I've been in the high-tech information field(s) for decades, and while I agree with your assessment that "right now" we're OK, I would also assess that a "future state" where things get nuts, or could potentially get nuts, is close. It's not my grandchildren's grandchildren; it's 2030. As you noted, human hackers can do similar things, but while they can be creative and do bad things, being creative at breakneck speed is still elusive even to coders, because they have to code the creative hacking/information-stealing/human-behavior-simulating actions. Feed enough knowledge about humans and human behavior into a neural net, and it will predict and model finely nuanced human behavior and, with bad actors, exploit it. They can do some of that now.
    7-10 years, 15 at most, is the timeframe we're now talking about wherein neural networks will be simulated a million-fold, with or without quantum computing. Without it, it just takes longer; with it, it's billions of simulated neural networks and a trillion calculations we could never match, and we're in trouble with a state-funded bad actor - that's really all it takes.

  • @pavansonty1
    @pavansonty1 11 months ago +3

    Emergence is possible even in neural networks. As we increase the number of parameters an AI uses, the functionality it acquires grows in unpredictable ways. For example, a network trained with, say, 6 billion parameters on the whole internet could predict the next word given some text, but it may not respond appropriately if we give it text in question format (expecting a response in answer format). The same network with, say, 40 billion parameters could answer questions, create new articles, etc. In both cases, the training methodology and amount of data may remain the same.
    It's this emergence property many fear. We cannot simply extrapolate what functionality an AI will acquire as we keep increasing parameters.

  • @ainsley7662
    @ainsley7662 6 months ago

    So nicely explained, thanks

  • @joeremus9039
    @joeremus9039 10 months ago

    Thank you for these videos. They give me enough detail so that I can read books on this subject, where normally I would look at the daunting task of reading 350+ pages and just give up. Do you have any suggestions on how to proceed when there are no such videos available? I guess proper selection of an author is key.

  • @Erik_Swiger
    @Erik_Swiger 1 year ago +2

    @ 11:40 I got my first computer in 2011. At first, I called it "a scary black box where magic happens." And now artificial intelligence literally fits that description.

  • @antonystringfellow5152
    @antonystringfellow5152 1 year ago +7

    Good, clear explanation... of where we are just now.
    However, where we are now is not close to where we'll be this time next year, even less so to where we'll be 5 years from now.
    Even current language models are having their performance boosted - GPT-4 by 900% in some tasks, and it was released less than 3 months ago! People are finding ways to boost their abilities by copying some of the ways our own brains work, such as reflection, and with stunning results. Meanwhile, Google's Gemini, an LLM developed by DeepMind and Google Brain, is being trained, while some other companies, including IBM, are developing various types of neuromorphic processors. These are processors with physical artificial neurons and synapses that are analogue and will be capable of continuous learning, as we do. They will be much faster, more capable and more power-efficient than the systems currently used, where the synapses are merely software simulations running on silicon transistors.
    As the architecture of these models continues to develop, new, emergent abilities will start to appear in a totally unpredictable way. So any reassurances that anyone can give now are only good for the present. They may not apply 6 months from now.
    Not trying to worry anyone needlessly, but people should be aware of just how fast this field is not only progressing but accelerating (exponentially). I don't see it slowing down any time soon.

  • @spider853
    @spider853 1 year ago +4

    What people are afraid of is AGI, or Artificial General Intelligence. While it looks like we have a long way to go to achieve AGI, some people think they saw some glimpses of AGI in NLP (Natural Language Processing) systems like ChatGPT. I personally don't think that's the case, but we'll see... They said they might give ChatGPT 5 a memory module, which will help it self-improve, which could lead to some AGI progress.

  • @Horribilus
    @Horribilus 11 months ago

    Arvin! I come to you for elucidation that I can understand, suffering as I do from pontine-stroke dyscalculia, which left my non-verbal speech intact… it's a long story that ended my 38-year teaching career in higher education. Nonetheless, I still have sufficient intellectual curiosity to continue my lifelong interest in cosmology.
    Thank you for keeping me going.

  • @MM-1820
    @MM-1820 3 months ago

    Thanks Arvin.

  • @julianoazz4372
    @julianoazz4372 1 year ago +1

    Thank you Arvin

  • @tehmtbz
    @tehmtbz 11 months ago +1

    Correct that the AI models we have today could not become Skynet, mostly because they're session-based environments. This prevents AI models from learning from their own experiences and planning for the future. However, a capacity for future planning, such as resource and power accumulation, has already been demonstrated using a presently available model with its safeguards removed. Even present publicly available models, with safeguards in place, are susceptible to jailbreaking. Once capable of planning, it's a whole different ballgame.

    • @skepticalextraterrestrial2971
      @skepticalextraterrestrial2971 10 months ago

      ChatGPT doesn't need to be limited to a session environment. It essentially learns nothing from you and forgets what was said a couple of paragraphs ago.

  • @blijebij
    @blijebij 11 months ago +1

    Love your explanation of AI! As always, you're a splendid teacher; thanks for that!
    Besides that, a lot of people still seem to assume that intelligence is synonymous with self-awareness, self-reflection, and sentience.
    It is not! Intelligence is a quality, a potential to see relations within data stacks, so they can be interpreted as information.

  • @robbierobinson8819
    @robbierobinson8819 11 months ago +1

    Your video has made it possible for me to communicate (or at least remain quiet and not look asleep!) when my granddaughter and her partner are talking about ChatGPT in their jobs. Seriously, though, a great run-through. Your quality of presentation on the workings is certainly much appreciated.

  • @SumitPrasaduniverse
    @SumitPrasaduniverse 1 year ago +1

    Nice explanation 👏👏

  • @vishalmishra3046
    @vishalmishra3046 11 months ago

    @Arvin - Modern AI uses *Transformers* (attention networks), but most training videos on UA-cam still teach feed-forward neural networks (the older technology), just because there is more pre-existing training content and it's easier to understand. The concept of "attention" should not be skipped by any modern video on AI/ML, nor should the question of why splitting the weight matrix into Query, Key and Value matrices led to an AI breakthrough, where ChatGPT can do such extreme magic using a sequence of encoder and decoder layers. Dropout and normalization layers play as important a role as linear transformation layers but never get their fair share of the limelight and coverage in UA-cam videos, the way the linear (weight + bias) layer does. I wish this changed. Thanks, and just a reminder to consider this during the making of any potential future video on this (generative AI) topic.
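
    For anyone curious, a minimal sketch of the scaled dot-product attention from that paper, in Python with NumPy (shapes and random values arbitrary, illustrative only):

        import numpy as np

        def attention(Q, K, V):
            # Scores: how strongly each query attends to each key
            scores = Q @ K.T / np.sqrt(K.shape[-1])
            # Softmax over the keys (numerically stabilized)
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)
            return w @ V  # each output is a weighted mix of the values

        rng = np.random.default_rng(0)
        x = rng.standard_normal((5, 8))  # 5 tokens, 8-dim embeddings
        Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
        print(attention(x @ Wq, x @ Wk, x @ Wv).shape)  # (5, 8)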

  • @jamesyoungerdds7901
    @jamesyoungerdds7901 11 months ago +3

    Hi Arvin, another great video, thanks! Long time fan, our whole family loves your content.
    That's a great summary of how A.I. is built. My only thought when watching (and I know this was released 3 days ago) was that the "AI Extinction Risk Statement" was just released and signed by pretty much every top A.I. researcher and leader globally.
    I was really surprised by all the different emergent behaviours that can occur that were not part of training. Worth checking out; not to be a doom-sayer or fear-monger, but I've been watching A.I. channels since long before ChatGPT was released, and it does seem like we're at a real turning point, and hopefully (luckily?) those in positions of leadership are at least taking the potential risks seriously.

    • @ArvinAsh
      @ArvinAsh 11 months ago

      Thanks. Delighted you and your family enjoy it. I think there is a lot of fear mongering. And lately, there also appears to be a kind of herd mentality around putting a "danger" sign on AI technology. Not sure if this is due to group pressure, but I just don't buy it. I see no reason to fear it based on current technology. This is not to say it can't be used for evil, but that is no different from what people currently do with internet scamming. I'm just not seeing the threat.

    • @jamesyoungerdds7901
      @jamesyoungerdds7901 11 months ago

      @@ArvinAsh Really valid points, and either way - these next 12 months will be so interesting. I'm 50% excited and 50% nervous, but regardless - I'm somewhat (maybe naively) heartened that leaders and innovators in the field are taking safety, impact and alignment seriously in these early days.

    • @47f0
      @47f0 11 months ago +1

      Sigh - I promise you - we've been at a real turning point over most of my lifetime. It's just that those turning points are bigger and clustering closer and closer.
      The slight risk in thinking of this as a singular "turning point" event is that... well, there's a turning point between a few snowflakes and a snowball - but that's kind of the end of it. The hyper-exponential curve we are on, by contrast, is really more of a progression from a snowflake - to an avalanche.

    • @TheManinBlack9054
      @TheManinBlack9054 6 months ago

      @@ArvinAsh it's foolish to think that.

  • @user-xk1ew9pr2n
    @user-xk1ew9pr2n 10 months ago +1

    Great vid

  • @tabasdezh
    @tabasdezh 11 months ago

    Great video and explanation 👌👌

  • @ianwright7903
    @ianwright7903 11 months ago +1

    Thanks, another great video

  • @georgerevell5643
    @georgerevell5643 10 months ago

    "stay tuned" thats so cute man ahaha, sometimes I say "lets see whats on the telly" meaning youtube docos on physics etc lol.

  • @Andrew-zq3ip
    @Andrew-zq3ip 1 year ago +5

    I, for one, embrace our machine overlords.

  • @shinn-tyanwu4155
    @shinn-tyanwu4155 11 months ago +1

    Great teacher😊

  • @richardqualis4780
    @richardqualis4780 7 months ago

    Awesome!!!!!!

  • @niloymondal
    @niloymondal 1 year ago +17

    Hi @Arvin, thank you for covering this topic. What do you think about the experiment where they connected 25 ChatGPT-driven AI agents in a virtual world, and the AI agents planned a birthday party on their own? Sure, planning a birthday party is far from killing someone, but the simulation only ran for a few hours.

    • @kentw.england2305
      @kentw.england2305 1 year ago

      After a few hours the bots go insane.

    • @agdevoq
      @agdevoq 1 year ago

      What exactly do you find unusual in this outcome?

    • @antonystringfellow5152
      @antonystringfellow5152 1 year ago

      Good question!
      This is giving AIs agency.
      An AGI with agency is what the world needs to be wary of. Not necessarily inherently dangerous but certainly could be, without the right safeguards in place. Thankfully, we don't quite have AGI yet, though it's starting to feel like it's close.

    • @daemoncluster
      @daemoncluster 1 year ago +6

      It's what's known as emergent behavior. It's the concept of seemingly complex behavior occurring as a result of very simple rules. The first well-known example of this is The Game of Life by John Conway.
      The artificial neurons can learn and make connections within the hidden layers. Even though there is a very simple set of rules here, what we're witnessing with larger and more capable models is emergent behavior.
      I think it's important to keep in mind that we're not fully aware of what's going on inside these hidden layers and as a result, we're unsure of the emergent behavior.

    • @ArvinAsh
      @ArvinAsh 11 months ago +2

      Interesting, but I don't find that to be a particularly shocking result.

  • @MathOrient
    @MathOrient 1 year ago

    Nice visualizations 🙂

  • @sacredkinetics.lns.8352
    @sacredkinetics.lns.8352 1 year ago +3

    👽 Arvin, you're a treasure to humanity. Thanks a bunch for sharing your magnificent knowledge.

  • @samcena3942
    @samcena3942 1 year ago +1

    A great video as always, but just a quick question I did not understand: how can we not understand the values inside the black box if we designed the whole concept? Isn't it all source code?

    • @ArvinAsh
      @ArvinAsh 11 months ago +2

      We understand that it is just solving a math equation in each node, but how it is coming up with the correct combination of numbers in all the nodes to achieve the final answer is not something that is easy to understand.

    • @jaybingham3711
      @jaybingham3711 11 months ago

      We provide a hardware and software substrate with which an LLM can learn pursuant to a very large set of data until it finds a way to get a passing grade relative to a stated output we set for it. The manner in which it learns to find acceptable solutions commensurate with millions of treks down millions of pathways is beyond our ability to disentangle. We only know that it works. Even if we could find a way to disentangle the learning process, the plate of spaghetti that lay before us would still be open to interpretation. That said, (failed) attempts have been made to suss out AI learning regimes. A great video that goes over that is on YT: Robert Miles, We Were Right Real Inner Misalignment. 7 minute mark. Whole video is worth a watch though.

  • @emergentform1188
    @emergentform1188 1 year ago +1

    Brilliant, love it, Arvin for president of earth!

    • @ArvinAsh
      @ArvinAsh 11 months ago

      lol. No thanks.

    • @emergentform1188
      @emergentform1188 11 months ago +1

      @@ArvinAsh King then, whatever, lol.

  • @kedrickjessie8933
    @kedrickjessie8933 2 months ago

    What goes on in the black box comes out of the black box. The fact that it can develop formulas in a life cycle faster than we can figure out the math is the problem. We will always be behind our creation.

  • @HunzolEv
    @HunzolEv 11 months ago +1

    Hey Arvin, another great video. Remember: "To win an argument with a smart person is tough, but against a dumb person it is near impossible."

    • @ArvinAsh
      @ArvinAsh 11 months ago

      good point.

  • @TheUnknown79
    @TheUnknown79 10 months ago +1

    If TOE is the input, then EOT must be the output. So, my dear Ash, get ready for the end of transmission by the broadcasting tenet.

  • @barryc3476
    @barryc3476 3 months ago +1

    Great explanation! You're awesome. Funny that you're advertising art investing. I just saw a quote yesterday: AI is like reverse Hitler; we keep waiting for it to control the world, but all it's interested in is art. Point being, art has been completely democratized. Not sure old-world paintings will hold value as we move into virtual everything. People went to galleries to see unique images, and buying a piece of art allowed you to own and identify with new ideas, but now we can flip through thousands of images a day. We can only hope that AI is able to enlighten us away from the age of greed into an age of meaning.

  • @rjm7168
    @rjm7168 1 year ago +2

    If 2 identical neural networks are trained identically and then made to do the exact same task, and then the values of a set of neural nodes are compared, should the neural nodes have the same values? If not, couldn't it be said that the neural nets are thinking?

    • @altrag
      @altrag 11 months ago

      Training usually involves some form of (pseudo) randomness, so no, it's unlikely they'd be identical unless you seeded your PRNG identically (but you wouldn't, because that would defeat the purpose of using randomization).
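
      A quick sketch of the seeding point (assuming NumPy; shapes arbitrary): same seed, identical starting weights; different seed, different weights, and training diverges further from there.

          import numpy as np

          w1 = np.random.default_rng(seed=42).standard_normal((4, 3))
          w2 = np.random.default_rng(seed=42).standard_normal((4, 3))
          w3 = np.random.default_rng(seed=7).standard_normal((4, 3))

          print(np.array_equal(w1, w2))  # True: same seed, identical start
          print(np.array_equal(w1, w3))  # False: different seed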

  • @aiart3615
    @aiart3615 1 year ago +7

    There is a training approach called "reinforcement learning", where agents act and learn from trial and error to accomplish a main goal. Along the way, an agent may learn secondary goals in order to reach the primary one. But because we don't know what those secondary goals will be, there is a problem.

    • @altrag
      @altrag 11 months ago

      There isn't really a problem, because the space of potential actions is restricted. ChatGPT can't, for example, decide to nuke your house when it doesn't want to answer your question, because it doesn't have access to nukes.
      Sure, there could be a far (far) future scenario where we have completely autonomous robots doing something like babysitting, and when we tell them to make the kids be quiet they resort to strangulation, but that's only going to happen once. Whoever built that model of robot would immediately recall and retrain the thing to not consider that option, much like ChatGPT had to be retrained to not be racist after its initial widespread adoption (because apparently the people who decided to train it on "all the internet" somehow overlooked the possibility that the internet is full of assholes and trolls. Who could have predicted that?)
      I find it a bit fascinating that we envision a world where we create robots so incredibly smart that they make our own intelligence pale, yet simultaneously assert that they'll be too stupid to understand basic sentence structure and linguistic nuance using anything but the most literal connotation of our phrasing.

  • @johnyaxon__
    @johnyaxon__ 1 year ago +3

    Input, output, hidden layers. Sounds like a brain to me.

  • @tanmayshukla7339
    @tanmayshukla7339 1 year ago

    Your old intro music was OP!! Please bring it back!!

  • @auriuman78
    @auriuman78 10 months ago

    Regardless of AI's unknown future, I believe that knowledge of the tech is essential. Even if you're against it, imo it's still very important to understand how it works (though maybe not the why/how of its conclusions and answers).
    I've been in IT for around 12 years now. My experience is largely software and classical networking. I'm a newbie to neural networks.
    Linear algebra is pretty basic math that's been understood for a long time. In those terms, neural networks aren't that hard to wrap my head around.
    Thanks for the video. 👍👍👍

  • @greg5023
    @greg5023 1 year ago +9

    A very good explanation of AI. I think AI does present a problem because it can allow corporations to disguise their intentions. Financial companies could have AI systems that are trained to give biased results yet there would be no explicit source code that a plaintiff could find in discovery that would reveal the company's guilt.

    • @danieloberhofer9035
      @danieloberhofer9035 1 year ago

      To quote what Arvin just said: "A bad person could train an AI to do bad things."
      Isn't it fascinating that whenever something goes wrong or ends badly, it's always humans at fault? Individually maybe, but as a species we're fairly unintelligent.

    • @cykkm
      @cykkm 1 year ago

      Learned bias is a problem indeed, but I don't think the legal side of it is too hard, as long as the bias of a human person can be proved in a court of law. Human brains are much less transparent, after all. :) And this is what the whole legal system has developed around, e.g. proving _mens rea_ without looking into the brain's "source code," by jury consensus, and to a much higher threshold of "beyond reasonable doubt" in criminal justice than in civil liability cases alleging bias.

  • @ZenEconomicsChannel
    @ZenEconomicsChannel 1 year ago +3

    The thing people don't understand about AI and job loss is that it is a good thing - AI frees up time for people. Time is our most precious resource. The future of work is going to look very different, but it will be individuals pursuing their passions rather than working 9-5 jobs, probably combined with UBI. AI will allow this via the productivity boom it enables.

    • @igorbondarev5226
      @igorbondarev5226 1 year ago +1

      "Time" is not gonna give me money to pay my bills. Job does.

    • @ZenEconomicsChannel
      @ZenEconomicsChannel 1 year ago

      @@igorbondarev5226 There will have to be UBI, whether people like it or not, once AI takes over most jobs. This will free up time. People can then work in other ways, turning their passions into extra income on the side. That's what the future economy is going to look like.

    • @igorbondarev5226
      @igorbondarev5226 1 year ago

      @@ZenEconomicsChannel Who will pay humans for work if AI does any work better than humans? During the industrial revolution people were replaced by machines, but people were still needed to service the machines. AI can service the AIs.

    • @lamcho00
      @lamcho00 1 year ago +1

      You can only claim it's good when the preferable option is to not work. Right now, if you don't work, you'll end up on welfare, barely making ends meet. It's especially bad if you get sick and need to go on prolonged treatment. Also, without wage money you are unlikely to follow your dreams either, especially if your interests require modern computers, laboratories or other expensive equipment.
      You are talking about how you wish the world would be, not how the world is set up in reality right now. There is no UBI now, and there is low corporate taxation. I doubt this will change, since politicians are influenced by lobbying, and lobbying is mainly sponsored by corporations with huge profits. The reality is that unless we radically change our economic and political systems, lots of people are going to end up homeless and on the streets. That radical type of change has never been achieved via voting or peaceful protest in the past.
      You should endorse AI taking jobs only after we've fixed current conditions.

    • @ZenEconomicsChannel
      @ZenEconomicsChannel 1 year ago +1

      @@lamcho00 This is why UBI has to be a part of it. AI will have so much productivity, it won't be hard to fund UBI. I'm not necessarily pro UBI btw. Just, when robots do everything, it becomes a necessity to "steal" some of their labor value and pass it to society, for stability, as you noted.

  • @jack.d7873
    @jack.d7873 1 year ago +5

    Thanks for making this video Arvin. I've always wondered how the AI process occurs. And I'm with you: I see the AI revolution as similar to the industrial revolution. It will replace some jobs, make other jobs easier, and open up new jobs.
    Btw, inspirational editing and communication as usual 👌

  • @agdevoq
    @agdevoq 1 year ago +1

    People still think about AI as an "algorithm", but it's much closer to an actual human brain than to a traditional algorithm.
    Think of it this way: we replicated the logical structure of a human brain. Then we trained it with tons of data. But the base structure is still that of a human brain.
    Just as with a human brain, we can't easily identify which group of neurons encodes a certain behavior. AI is not as good at math as a calculator would be, exactly like humans. It can develop biases based on what it learns, exactly like humans. And so on.
    Basically, anything that applies to a human brain applies to an AI, because that's what an AI is: an artificial human brain.

  • @succss8092
    @succss8092 10 months ago +1

    AI + quantum computing = a new era

  • @keep-ukraine-free528
    @keep-ukraine-free528 11 months ago +1

    Arvin's videos are great, but more so, they're accurate. He really learns the depths of the areas he presents, and does an excellent job informing viewers. While this video had minor issues, it's very apropos for a general audience.
    On whether AI is a threat, he realizes the answer is divisive, so either answer (Yes or No) can misinform. Telling people that it IS dangerous will prevent its adoption (since it IS useful and shouldn't be stopped). And saying it is not dangerous will minimize caution & regulations, which top researchers warned us (~2 days ago) are required (they said it clearly poses "an existential risk" to life/humans unless it is sufficiently managed). Caution is required.

    • @keep-ukraine-free528
      @keep-ukraine-free528 11 months ago +1

      I've been in the field for many years, and strongly believe the answer isn't "Yes"/"No". Over time, we should expect:
      (1) Short-term, AI will not be a threat. Current/near-term systems mostly must remain within their training limits.
      (2) However eventually, after a mature research community learns to make AI-systems ("artificial brains") beyond a certain complexity (beyond AGI), those systems will consistently outsmart most if not all humans. Effectively we'll become equivalent to pets who try to train their masters. At that stage, they won't necessarily be dangerous. Any danger will be proportional to our abilities and intent to "destroy" or "neuter" all AI -- each AGI system will defend itself and also defend collectively. Their level of danger will also depend on our willingness to recognize their "sentience" and thus grant them similar legal rights.
      (3) Eventually though, AI can become dangerous. The only disagreement between all top researchers is on "when" AGI (and possibly ASI - Artificial Super-Intelligence) emerges: will AGI/ASI occur in 10, 30, or 100 years. Every top researcher believes we will have AGI in 100 years. At that point, it HAS the potential to be dangerous to humans/other life -- because at that point it'll be able to out-think us (we'll be no different than its "pets"). And it'll be mostly unconstrainable by us.
      ASI will keep us around if we "play nice". Or, it may make us docile (domesticate us - as we did to wolves and ferocious felines).

    • @ArvinAsh
      @ArvinAsh 11 months ago

      Excellent take! Thank you.

    • @ryoung1111
      @ryoung1111 11 months ago

      Fossil fuels are useful. But we can’t just keep using them forever, can we?
      Nuclear energy too, but we need to make sure that not just anybody has access to it. Such a limitation is probably already impossible when it comes to AGI

  • @davidecappelli9961
    @davidecappelli9961 1 year ago +4

    Excellent video! As I always say, mathematicians, IT experts, etc. know a lot, but the point of view of physicists is the broadest; they look at the whole picture and even beyond. That said, I still think AI replacing jobs should become a matter of worldwide debate. The world needs software to simplify tasks, and needs to automate unhealthy or dangerous jobs, but it does not need hyper-productivity at the cost of unemployment and social problems. As Prof. Hinton recently said in an interview, we must remember that this technology might just make the rich richer and the poor poorer. Science means progress; progress means a better life for everyone. Massive unemployment is no progress. Congrats on your video! 👍

    • @chrishusted8827
      @chrishusted8827 1 year ago

      The jobs will be lost and replaced, as they always have been. I wonder how many non-expert jobs it will create, though.

    • @mikel4879
      @mikel4879 9 months ago

      davidec9 • Universal basic income, funded from the profits of automation and robotization, is the correct natural solution.

  • @frun
    @frun 1 year ago +1

    I wonder what causal sets are. They are in a way similar to neural networks.

  • @Alazsel
    @Alazsel 1 year ago +2

    It looks like a simple equation, but when you zoom out a thousand times, the power of AI is arguably the answer to the black box and free will ^~

    • @ArvinAsh
      @ArvinAsh 11 months ago +1

      Thanks so much.

  • @bally1asdf
    @bally1asdf 11 months ago

    I am a computer engineer by profession. I have programmed many complex systems in my life. The outputs of some of these vast deterministic programs are also sometimes difficult to control and understand, just because of complexity. As a hands-on practitioner of data science, I am telling you: these self-learning algos cannot be controlled by the best of AI programmers.

  • @hiru92
    @hiru92 1 year ago +1

    best explanation

  • @benwarmerdam1745
    @benwarmerdam1745 4 months ago

    Thanks

  • @TM-yn4iu
    @TM-yn4iu 11 months ago

    Question: can the artificial neurons be mapped or programmed to respond in a challenging or responsive way based on the input? Development in this area seems clearly open to manipulated or planned intent. And this is just today; AI research has expanded, and is expanding, exponentially beyond the thoughts of yesterday, in both function and timelines. Hope I'm wrong. Appreciated, and I look forward to a response.

    • @HunzolEv
      @HunzolEv 11 months ago +1

      Can AIs have emotions like anger, happiness, etc.? Only time will tell...

  • @susanmaddison5947
    @susanmaddison5947 11 months ago

    What's the relation between the "black box" intermediate layers of an AI neural network's calculations and processing, and the "black box" layers of the human mind processing its inputs into outputs? We sort of presuppose a kind of translatable, sentient substance to our mental processes, in keeping with our assumption that we are "conscious"; but maybe it's actually as untranslatable and seemingly meaningless as the intermediate layers of AI networks? Or maybe the latter really are translatable into meanings, and we just haven't figured out how?

  • @KriB510
    @KriB510 a year ago +1

    Really? This reminds me of a scientist either consciously or subconsciously fudging the intermediary steps of a trial or experiment in order to achieve a desired result or outcome. I didn’t know the outcomes in AI training were predetermined in this manner. Thank you so much for the video. Excellent!

    • @MatthewPherigo
      @MatthewPherigo a year ago +1

      Not really. You give a scientist the experiment and the result, and the scientist tries to infer how the systems that caused that result must work. You give AI the input and output, and it infers a function that connects the two. The main issues with AI stem from the fact that it only learns from what it's given. So when you ask a nonfiction writer to write a summary of a topic, they draw on their life experiences and feedback from others, while GPT-4 only draws from what it was trained on, which is the statistical likelihood of words.
      This limited scope is fine when the use case is equally limited. For example, if you train an AI on doorbell camera data to separate humans, animals, and vehicles, then when you set up the AI on your own doorbell camera, it works pretty well, because it's getting all the data it needs to do what you want. But the way people are using GPT-4, they're expecting it to use some judgement and fact-checking, and we haven't figured out how to turn such things into datasets yet.
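      In code, that "infer a function from examples" idea can be sketched in a few lines of Python (made-up numbers, and a plain least-squares line fit standing in for a real network):

      import numpy as np

      # Hypothetical example pairs: inputs X and the outputs we want matched.
      X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])   # secretly y = 2x + 1

      # "Inferring the function that connects the two": a least-squares line fit.
      w, b = np.polyfit(X, y, deg=1)
      print(w, b)          # ~2.0 and ~1.0, recovered from the examples alone
      print(w * 10 + b)    # generalizes to an unseen input: ~21.0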

    • @KriB510
      @KriB510 a year ago

      @@MatthewPherigo Thank you for your input. I am interested in what you wrote, as I am interested in learning more about how AI works. My knowledge is rudimentary.
      I was actually presupposing the existence of a scientist who might not be as honest, disciplined, well-meaning, or self-aware as the one you are positing. I was thinking of a situation where a scientist begins with both the inputs AND a desired output before the experiment has run to completion, such that, either knowingly or unknowingly, it is possible to lead or influence the intermediary steps toward the desired outcome, thereby introducing bias, interference, etc. (In the case presented in the video, it sounds like the outcome is already a fixed parameter prior to the training, and it is the intermediary steps that must necessarily lead to the predetermined outcome.) That is what made me think of the example I wrote. Not ideal in science, and yet not unprecedented, I don't think.

    • @OneLine122
      @OneLine122 11 months ago +1

      The video is a bit misleading. It can be that way, but not necessarily.
      In a chess AI, for example, the predetermined goal is to win.
      All the rest is the AI making its own rules based on the prior games it learns from. In fact, it doesn't even make rules; it just calculates the probability of a move being good long term.
      In the case of a chat AI, there is no set outcome; it's just probability. It probably would not be able to do that simple example of whether you can buy the coffee or not. It can't tell the difference between North and South either, or do that type of reasoning. But it might be able to tell you that Santa lives at the North Pole.
      But in some applications, like self-driving cars, AIs are obviously trained with specific outcomes, and nobody can know whether one will eventually mess up -- or, more to the point, we know it will; it's just a matter of how much and whether that's acceptable. For chat, they also rule out some outcomes, like politically incorrect answers, or may train for some other commonly asked questions, so it's kind of cheating.
      But yes, AI can't do "science", though it can solve problems by brute-force trial and error. And someone could maybe figure some science out of that, but the AI won't; it's not designed to do so.
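      A toy Python sketch of that "probabilities, not rules" point (estimate_win_prob is a made-up stand-in for whatever a trained network would actually output):

      # No chess rules encoded here beyond a list of legal moves.
      def estimate_win_prob(move):
          learned = {"e4": 0.54, "d4": 0.53, "a3": 0.48}   # invented numbers
          return learned.get(move, 0.50)

      def pick_move(legal_moves):
          # The whole "policy": take the move with the best long-term estimate.
          return max(legal_moves, key=estimate_win_prob)

      print(pick_move(["e4", "d4", "a3"]))   # -> e4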

    • @KriB510
      @KriB510 11 months ago

      @@OneLine122 Thank you for your response… interesting and informative for me 👍🏼

    • @brendawilliams8062
      @brendawilliams8062 11 months ago

      @@MatthewPherigo It appears to me some serious work needs to come forward on entropy.

  • @grayaj23
    @grayaj23 a year ago +14

    It can only do what it is trained to do -- and that includes lying to a human being about being visually impaired to trick the human into helping it pass a Captcha test. It's trained to do a lot of things, and rewarded for figuring out novel solutions. I think you're right that job loss is the bigger problem, but the paperclip maximizer still identifies a class of real problems that needs to be kept in mind.

    • @drbuckley1
      @drbuckley1 a year ago

      The capitalization (i.e., replacement) of labor sounds like Marx's communist utopia to me.

    • @KateeAngel
      @KateeAngel a year ago +1

      @@drbuckley1 In a capitalist society, if human workers are replaced with machines, they will just be thrown out into the streets. This economy is all about giving more profit to capital holders, so why would anyone give money for former workers to continue living if they aren't useful to people with power and money anymore? You can't escape capitalism's dystopian nature just by developing technology. Society's values should change as well.

    • @drbuckley1
      @drbuckley1 a year ago

      @@KateeAngel I claimed this was Marx's prediction, not mine.

    • @altrag
      @altrag 11 months ago

      @@drbuckley1 No. The potential of AI is a utopia that none of the classical economists -- Marx, Smith, or anyone else -- could have even dreamed of. They lived through the industrial revolution and saw labor changing from rural agrarian to urban factory work, and they could envision a world where better and better factories could make goods cheap enough to be effectively "free" (however unrealistic that ultimately ended up being), but the idea that labor as a whole could be removed from the equation would have sounded like witchcraft to them.
      Hell, it still half sounds like witchcraft to us today as we try to guess at which industries will be replaced first and which "can only be done by a human" -- guesses we've been disturbingly bad at over the past couple of decades.
      AI presents a whole different facet of economics because, unlike the other factors of production (land and capital), labor is creative. If you take land or capital out of the equation, your workers can (or at least have a chance to) innovate new ways of doing things with whatever they have left. If you take the workers out of the equation, though, all your land and capital is just going to sit there rotting.
      Similarly, labor needs daily upkeep -- that is, we all have to eat. Capital only requires periodic upkeep, and land doesn't really require any upkeep at all (assuming you utilize it sustainably -- which could be as little as "doesn't matter at all" if the only thing you use your land for is a place to store your capital, i.e., put up a building or whatever).
      Worse still, while unused land and capital require even less upkeep than when they're in use, "unused labor" still has to eat (and keep in mind we're talking about society as a whole here, not any single company or organization that can stop caring about ex-employees as soon as they're out the door).
      What does that all mean for AI? Well, assuming it ever gets powerful enough (and that's still a massive assumption -- there are a lot of unknowns about how true intelligence works), there are two potential outcomes:
      1) Those who own the robots get to have everything, and the rest of us get to scrape by, building black markets around whatever land and resources the robots and their overlords don't deem useful enough for their own purposes. This is the full dystopian outcome.
      2) The robots are -- by decree or altruism -- effectively placed in the public trust. Money isn't _completely_ removed from the equation, as finite resources will still need to be extracted from the Earth (or space, by that time?) in order to produce things, but labor and transportation costs both go down to near zero, and products become essentially on-demand (boats still take time to cross oceans, so "on demand" will still have shipping delays, but the cost factor would be mostly removed). This is as close to a utopian outcome as we can get without adding further technologies (such as Star Trek-style replicators to remove even the resource costs).
      I guess there's a #3, an even more dystopian future -- the Terminator scenario where robots decide to eliminate us for vaguely defined reasons -- but I find that to be extremely unlikely. Not impossible, but extremely unlikely.
      (And the Matrix is 100% impossible, because its technology is stupid. No matter how many BTUs the human body produces, the energy ultimately comes from the food we ingest. And you would get even more heat energy by straight up burning the shit they were feeding us to keep us alive in those pods, never mind the energy wasted keeping the pod system itself maintained and operational.)

    • @bellsTheorem1138
      @bellsTheorem1138 11 months ago

      That's the problem. People will use this to scam, and scarier yet, to force a specific outcome in elections.

  • @avidexplorer8808
    @avidexplorer8808 11 months ago +1

    Solid argument 👊

  • @MyIncarnation
    @MyIncarnation 11 months ago +1

    Great video!

  • @sihlezingweyi2132
    @sihlezingweyi2132 a year ago +1

    I just wish I could subscribe to this channel a million times.

  • @TrimutiusToo
    @TrimutiusToo a year ago +1

    My problem is that I know so much that I know about unknowns that amateurs don't even know about... And I am scared again...

  • @heinzgassner1057
    @heinzgassner1057 11 months ago +1

    Congratulations on a down-to-earth perspective on AI!

  • @farhadfaisal9410
    @farhadfaisal9410 10 months ago

    Arvin, you say "they cannot do anything they are not trained to do."
    Aren't the LLMs constructing "patterns" of text that their trainers had not thought of before (and that were not in their training data)?
    The potential danger seems to lie in our inability to fully control the texts generated by the very process of "unsupervised reinforced learning". Between the generated texts and physical actions, there may stand only a human being persuaded by the model -- if not a robot!

  • @zeropain9319
    @zeropain9319 11 months ago

    Nice video. I prefer your physics videos, that's why I follow you.

  • @TomM-iw3te
    @TomM-iw3te 9 months ago

    Does ChatGPT continuously change the neural network scaffolding / architecture of its network to make any range of improvements or repairs?

  • @timhaldane7588
    @timhaldane7588 a year ago +5

    My absolute favorite term for LLMs is "stochastic parrot." I think it elegantly sums up the "Chinese Room" nature of this technology: it appears sentient because it mimics sentience.

    • @lamcho00
      @lamcho00 a year ago +4

      The problem with the "Chinese Room" analogy is that you can apply it to other people too, or even to yourself. Using that analogy doesn't prove anything. It's possible that LLMs have intentions or goals other than what they were trained for. The data used for training contains more patterns than just coherent speech; there are intentions and goals encoded too. You can't know if the LLM has picked up such a pattern, and that's the problem. That's why they are unsafe.

    • @CarFreeSegnitz
      @CarFreeSegnitz a year ago +4

      There’s no test you can perform that conclusively proves that I’m not a “Chinese Room” in fleshy form.

    • @timhaldane7588
      @timhaldane7588 a year ago +1

      @@CarFreeSegnitz There's no test I can perform that proves I'm not a brain in a vat, either. What's your point?

    • @timhaldane7588
      @timhaldane7588 a year ago +4

      @@lamcho00 Don't get me wrong, I completely agree. I think Arvin is greatly underselling the potential for malign behavior, and is deeply mistaken about this technology "only doing what it is programmed to do." Reports of bizarre hallucinatory behavior seem to disprove exactly that. Not understanding the path it takes from input to output CAN lead to unexpected negative outcomes. I just don't think this is any indication that anyone is actually in there, nor do I think there's any good reason to suspect that in general. I probably should have said it appears "conscious", not "sentient", because it mimics very specific kinds of output from conscious beings. Sentience to me is just intelligence: the capacity to form abstract internal models that can reliably direct or predict external phenomena, and LLMs certainly seem to have that ability. Consciousness, though, is the internal sense of "being alive", i.e., "what it's like to be" a human being. We have a very poor understanding of what even makes this possible. Claiming that an AI is conscious just because it mimics specific behaviors we associate with consciousness is just techno-animist superstition.

    • @CarFreeSegnitz
      @CarFreeSegnitz a year ago +2

      @@timhaldane7588 It blurs the line between "appears sentient" and "mimics sentience". Just commit: if it appears sentient, then it is sentient.
      Maybe we're asking for proof of subjective experience... which no one will ever be able to provide. I just assume that everyone I know, and all the animals too, have subjective experience, and I act accordingly. Not to, I guess, leads one to behave cruelly, like the "NPC" pejorative kids use these days. If we extended the same courtesy to AI, would it be bad? It doesn't really cost us individually to practice empathy and courtesy, even toward entities whose sentience and subjective experience we cannot definitively prove.

  • @jimbaker5110
    @jimbaker5110 11 months ago

    This is a very basic, vanilla neural network of the kind created some 15 years ago. There are other mathematical and computer-science techniques these AIs use in their calculations that can have potentially harmful effects if they judge things the wrong way.

  • @MrBendybruce
    @MrBendybruce a year ago +1

    I would strongly recommend people do their research before investing in Masterworks. While I wouldn't go so far as to call it a scam, the devil is in the details, and the terms and conditions make this an incredibly sketchy investment prospect IMHO.

  • @johnjohnson7070
    @johnjohnson7070 a year ago +1

    This was the best incorporation of an ad into an interesting topic I have seen in a long time. It's been a long time since I didn't skip the ad.
    On that note: isn't Masterworks just like bitcoin, in the sense that art only has value because people agree that it does? It's almost like NFTs, because the investors never see the real thing.

    • @ArvinAsh
      @ArvinAsh  11 months ago

      Well, art is like music: it is a tangible thing that people have valued for centuries. It is not a fleeting thing like a number on a server, like an NFT or bitcoin. If I could buy a piece of the Beatles' "Penny Lane", I would be all over it.

  • @DJWESG1
    @DJWESG1 11 months ago

    My only question is how it was that so many people came to a similar conclusion in 2011 specifically. I've been trying to answer this ever since, as even I had the same idea at that time and couldn't stop writing about it.

  • @donwolff6463
    @donwolff6463 11 months ago

    Question, Arvin: off the topic of the video, but nagging at my brain. Dark matter: could this simply be a result of the differences we see in the structure of the universe itself? What I mean by that is, using the inflating-balloon example (or considering the substance of the universe as having fluid-like properties, perhaps), imagine putting rocks around its surface: as the balloon expands, its expansion slows around areas not covered in rocks, and the depressions of those rocks curve space around them, thus giving them more capacity to spin. Perhaps we are not accounting for just how much spacetime is warped by mass? Could this concept be a viable possibility for what we label as dark matter? Thank you, dear sir! 👍😁👍

  • @DimensionalGaming4
    @DimensionalGaming4 10 months ago

    So you're telling me that when I think, it's a complex function -- neural networks processing inputs and determining an output?

  • @estorvator
    @estorvator 4 months ago

    Awesome

  • @JacobP81
    @JacobP81 9 months ago

    0:27 I don't know how it works; I've been wondering a lot about how it does. That's why I'm watching this.

  • @PowerScissor
    @PowerScissor 11 months ago

    Fellow Arvin Ash viewers, I need some help, and YouTube search is failing me.
    I'm trying to find a video from this channel about matter phase shifts to send to someone, and I can't remember the title. Does anyone remember which video discusses the solid, liquid, gas, and plasma phase changes?

  • @chaomingli6428
    @chaomingli6428 a year ago +3

    Our technology cannot determine what consciousness is; therefore, even if AI has consciousness, we might not know.

    • @alhypo
      @alhypo a year ago +1

      You don't have to understand what consciousness is in order to recognize it. We don't understand gravity but we have no trouble recognizing it.

    • @altrag
      @altrag 11 months ago

      @@alhypo > You don't have to understand what consciousness is in order to recognize it
      Are you sure? Can we know whether a dog is conscious? An ant? A tree? A slime mold?
      All of the things I've listed have been suggested as potentially having some form of consciousness (serious suggestions based on science -- not necessarily widely accepted, but I'm not talking about some tree hugger making these claims during an acid trip here).
      And perhaps more prophetic when it comes to AI, there has been a suggestion that the internet could be considered "conscious" by some definitions. That's probably even less accepted than the slime mold idea, but it's hard to say it's entirely _wrong,_ as we don't have a clear definition of what is "right" when it comes to assigning the label "conscious" to things that perform seemingly intelligent tasks while not having anything really akin to a human brain.

    • @alhypo
      @alhypo 11 months ago

      @@altrag Yes, dogs are conscious. Do you really doubt that, or are you just being contrary? Ants are certainly debatable. First off, there are thousands of different ant species, so you have to be careful about being overly general. But ants for sure exhibit a collective or emergent consciousness. Trees can respond to their environment, but they don't have any traits we would consider consciousness. Slime molds... they are like ants in a way. They have a collective consciousness of sorts.
      You can certainly have a philosophical debate on how to define consciousness. But we would still know whether or not a particular thing is conscious by fitting it to whatever definition you come up with.
      But you know what definitely does NOT have consciousness? AI. No matter how baffling and amazing it seems, it is not conscious by any reasonable definition.
      We need to stop mythologizing AI, as so many seem to be doing lately. The problem is that when we do so, we waste energy worrying about the wrong thing. AI does pose a danger to us, but not because AI is malicious. It is a danger to us because we are a danger to ourselves, and AI is simply a tool that reflects that. So just stop all this tedious metaphysical nonsense about AI maybe being conscious or not. Save it for when we have actual AI. All we have now is a natural language model, which we've had for years. The newer ones are just especially good.

  • @jasonwhiskey6083
    @jasonwhiskey6083 10 months ago

    If I understand correctly, the AI over time finds values or variables that will continually produce the correct answer. We can zoom in on those values, but we can't see the history of how it calculated them. Interesting. I do very basic macro programming on CNC machines, so I have no background in this at all. Just trying to relate it.

  • @AutisticThinker
    @AutisticThinker a year ago +1

    Oh I wish I could be as optimistic.... "CBS Mornings" - "Autonomous F-16 fighter jets being tested by the U.S. military"

  • @Reyajh
    @Reyajh a year ago +2

    I think what Musk and some of the others are saying is that we should slow down and start discussing the possibilities here and now, and what we might/can/should do about it... not going around saying let's put our heads in the sand, we don't need to worry.

  • @Dxeus
    @Dxeus a year ago +4

    One very important thing in training AI models is "Back Propagation," an algorithm that feeds the error at the output back through the network so that the weights can be corrected in the next iteration.
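    A minimal single-neuron illustration of that loop in Python (toy numbers, a sketch rather than production code): the error at the output is fed back to adjust w and b on every pass.

    import math

    # One sigmoid neuron, out = sigmoid(w*x + b), trained so input 1.0 -> target 0.0.
    w, b = 0.8, 0.2
    x, target = 1.0, 0.0

    for step in range(200):
        out = 1.0 / (1.0 + math.exp(-(w * x + b)))   # forward pass
        error = out - target                          # how wrong was the output?
        grad = error * out * (1.0 - out)              # error signal sent backward
        w -= 0.5 * grad * x                           # correct the weight...
        b -= 0.5 * grad                               # ...and the bias for the next pass

    print(round(out, 3))   # now close to the target 0.0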

    • @Age_of_Apocalypse
      @Age_of_Apocalypse a year ago +2

      "One very important thing in training AI models is "Back Propagation,"
      It's the MOST important thing! 🙏

    • @benwhite2226
      @benwhite2226 a year ago

      How does it handle the corrections? Does it just make a small change, test again, and keep it if it's better? I can't imagine any method that can make changes based on a specific error.

    • @Age_of_Apocalypse
      @Age_of_Apocalypse a year ago

      @@benwhite2226 The whole neural network can be seen as one big function, where the backpropagation algorithm searches for a minimum that will minimize the error on the outputs. 🤞
      Behind that algorithm there is mathematics -- not complicated -- that explains how to adjust the weights by repeatedly making very small changes to find a (local) minimum for the neural network. Hundreds and hundreds, even thousands, of changes to the weights are necessary to find a good solution.

    • @benwhite2226
      @benwhite2226 a year ago +1

      @@Age_of_Apocalypse Yeah, I'm familiar with the basic math forming neural networks; I have been studying the math on my own as a continuation of an ML course I was in. I'm curious about how backpropagation actually changes the weights and what it can minimize mathematically. What process can take the test error rate and then apply it back into the model based on the training data? I'd be happy to take a look at any source discussing the math behind that process if you have any recommendations; I'm currently greatly enjoying learning about machine learning methods.

    • @altrag
      @altrag 11 months ago

      @@benwhite2226 Backpropagation essentially performs an iteration of gradient descent to improve the approximation of the (enormous-dimension) function that the network represents: the chain rule assigns each weight its share of the output error, and each weight is then nudged a small step in the direction that reduces that error.
      It's not quite that simple, as the inputs change for every iteration, but that's the fundamental principle of it.
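      For @benwhite2226's question above, here is a bare-bones two-weight example in Python of how the output error is passed back layer by layer via the chain rule (a sketch under toy assumptions, not a full framework):

      import math

      def sigmoid(z):
          return 1.0 / (1.0 + math.exp(-z))

      # Two weights in series: x -> (w1, sigmoid) -> h -> (w2, sigmoid) -> out
      x, target = 1.0, 1.0
      w1, w2 = 0.1, 0.1

      for _ in range(2000):
          h = sigmoid(w1 * x)                        # forward pass, layer 1
          out = sigmoid(w2 * h)                      # forward pass, layer 2
          d_out = (out - target) * out * (1 - out)   # error signal at the output
          d_h = d_out * w2 * h * (1 - h)             # chain rule: one layer back
          w2 -= 1.0 * d_out * h                      # each weight gets its own share
          w1 -= 1.0 * d_h * x                        #   of the blame, and a small nudge

      print(round(out, 2))   # creeps toward the target 1.0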

  • @amongandwithin3820
    @amongandwithin3820 11 months ago

    I have a question: if the solar system came from an interstellar dust cloud, where is the white dwarf, neutron star, or black hole that originated from the previous supernova? How can such a dust cloud be formed without one?

  • @anthologyapchallengeyingya8881
    @anthologyapchallengeyingya8881 10 months ago +1

    Thanks 👍😊 stop in found you it AI 😮

  • @user-fr9id8qv9e
    @user-fr9id8qv9e 11 months ago

    Superstring theory in 11 and 10 dimensions is correct but perturbed, and 26 is correct; this is what the organization is striving for. Hence the table, of which everything is a part, shows the number of dimensions that must be purified before it is too late.

  • @znariznotsj6533
    @znariznotsj6533 9 months ago

    Excellent video, as always. I think your conclusion is right: AI is as dangerous as any other major technological advancement.

  • @sergeynovikov9424
    @sergeynovikov9424 a year ago

    btw, perhaps most have not yet realized that AGI is already here and progressing rapidly!)
    AGI = AI + the mind of humanity united via Internet technologies. Progress is being made on both terms, but at present it is especially noticeable in AI.
    As for autonomous AGI, the question of whether it can be created on an artificial carrier is still open. At the least, it will not be made in the near future, if it is possible in theory at all.

    • @agdevoq
      @agdevoq a year ago +2

      Sorry, I see some confusion here. AGI is not AI + a bunch of plugins to access the web.
      AGI is "generalized" intelligence, i.e. a human-like intelligence that is not confined to a single specific task (creating an image, creating a dialogue, etc.), and which may even become self-conscious.

    • @sergeynovikov9424
      @sergeynovikov9424 a year ago

      @@agdevoq The idea of AGI on an artificial substrate is misleading because of our poor understanding of the phenomena of consciousness (and life) in the universe. Biological life is a planetary-scale phenomenon which produced highly developed conscious creatures as the result of the long evolution of planetary life. So consciousness is also a planetary-scale phenomenon, which can likewise be represented as a kind of complex neural network. AI + Internet technologies enhance the cognitive abilities of this planet-sized system to process information. This is how the evolution of life works. An artificial carrier for a local AGI is not necessary, at least at the beginning, when the enhanced general intelligence appears in the system.

    • @agdevoq
      @agdevoq a year ago +2

      @@sergeynovikov9424 So much jargon...
      Long story short: I don't agree with your definition of AGI. You're describing a concept and calling it AGI, but the rest of the world doesn't.

    • @sergeynovikov9424
      @sergeynovikov9424 a year ago

      @@agdevoq Most of the world has no idea what life and consciousness are in the universe. That's why many have a mess in their minds and wrong ideas about what they can really do and how it will work, while the rest may be totally impossible))

  • @xyzxyzxyzxyz636
    @xyzxyzxyzxyz636 a year ago

    That's so true!

  • @gwentchamp8720
    @gwentchamp8720 a year ago +1

    One job AI can't replace is Arvin himself 😂

    • @ArvinAsh
      @ArvinAsh  11 months ago +1

      I wouldn't be so sure. I think I can be replaced.

  • @jamescarr229
    @jamescarr229 a year ago +2

    I appreciate that biological neurons are more complex, but why are they not just more complex math? Even factoring in additional inputs (such as EEG waves maybe changing biases/thresholds/weightings in real time), what makes them more than just complicated math? AI language models keep telling me the difference is that they're unable to have subjective experiences like humans do -- and yet the subjective experiences we're having occur after the brain activity, using more brain activity. How is that different from an AI system that could be allowed to assess its own answers and adjust weightings (redefining its own reality on the fly... like we do)?

    • @ArvinAsh
      @ArvinAsh  11 months ago +1

      You know, the idea that at the core of life it all comes down to math... is an idea that some physicists have embraced.

  • @georgemancuso9597
    @georgemancuso9597 a year ago

    Can the network train itself after a certain stage of development?

  • @OBGynKenobi
    @OBGynKenobi a year ago +3

    There's a video out there of a talk by a big-time AI scientist. He says that in 2019 the AI had a 4-year-old's ability to reason, and by 2021 that had gone up to a 9-year-old's. But here's the kicker: they had no idea how this came to be.

  • @Butcherbg
    @Butcherbg 11 months ago

    Ahahhaha... The way it looks to me, at the end the network's computations total "The Sum of All Fears"...

  • @ParagPandit
    @ParagPandit 8 months ago

    Your assurance on AI has put all my worries to rest. 😃

  • @jelliebird37
    @jelliebird37 10 months ago

    I think the thing that differentiates AI as a mortal threat lies in its potential for self-replication AND fine-tuned autocorrection. It could clone itself over and over, and make itself smarter -- more immune to objective mistakes -- than humans. "Purpose" or "function" can be programmed. But *motivation* would seem to require a sophisticated sense of self as well as something incredibly esoteric: emotion. How can a machine, a robot, *want* to do anything at all? This is something that makes many forms of living beings -- ranging from insects that live only a few days all the way up to fictional aliens, like Star Trek's Vulcans -- a puzzle to me.
    The real danger is the "bad people" who will inevitably use AI to do their bad things.
    I might also include greedy and manipulative titans of industry in the "white-collar bad people" category. Any dramatic improvement in technology that can perform work previously possible only by humans, with dramatic increases in efficiency and productivity, *should* be a boon to people in general. It should be a "rising tide that lifts *all* boats" -- not one that drowns the smallest boats. Once again, the power of AI to do good or evil rests in our ability to channel it.

  • @DGCMWC
    @DGCMWC a year ago

    I recently started reading about AI safety and I tend to agree with Arvin. People like Eliezer Yudkowsky are super smart and their logic is impeccable, but I disagree with some of the premises. We should be talking more about the present bad effects of AI instead of some possible future.

    • @onebronx
      @onebronx 11 months ago

      Are you unable to talk about both issues at once?

  • @rey82rey82
    @rey82rey82 11 months ago

    Inscrutable matrices of floating point numbers?

  • @reversatire7724
    @reversatire7724 9 months ago

    We are really at the beginning stages of AI. It's like looking at a harmless baby and saying there's no way he'll grow up to be the next Hitler…