What are Neural Networks || How AIs think

  • Published 26 Sep 2024

COMMENTS • 473

  • @jord5626
    @jord5626 6 years ago +869

    I came to learn, realised I'm not smart enough and stayed for the drawings.

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago +12

      If you like AI applied to games you might want to give my channel a check. Cheers!

    • @musicalbrit3465
      @musicalbrit3465 6 years ago +50

      Daporan, self-advertising on someone else's channel isn't cool, mate

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago +20

      I had no bad intentions, but I understand your view.

    • @DehimVerveen
      @DehimVerveen 6 years ago +4

      If you want to learn more about Machine Learning / AI, you should give this playlist by Andrew Ng a try: ua-cam.com/play/PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN.html It's really great. I've found these exercises go well with the material: github.com/everpeace/ml-class-assignments/tree/master/downloads

    • @310garage6
      @310garage6 5 years ago +2

      I'm not smart enough so I turned the sound off and looked at the pictures 😉

  • @sovereigncataclysm
    @sovereigncataclysm 4 years ago +42

    6:25 smooth transition there

    • @toast_bath5937
      @toast_bath5937 3 years ago +4

      So smooth I had to click your time stamp to realize there was a transition

  • @Dreamer66617
    @Dreamer66617 6 years ago +53

    By 2:29 I fully understood the concept behind neural networks... I'm a third-year comp sci student and never heard anybody explain this so perfectly. Thank you!! Very impressive!!!

  • @44kainne
    @44kainne 5 years ago +84

    Honestly, I would watch any programming course taught by you in this style.

  • @dittygoops
    @dittygoops 4 years ago +68

    CB: I will just run through this, you get it
    Me: no I don’t

    • @monkeyrobotsinc.9875
      @monkeyrobotsinc.9875 3 years ago

      Yeah he sux ASS

    • @Naokarma
      @Naokarma 3 years ago

      @@monkeyrobotsinc.9875 That was not the point of the comment.

    • @Naokarma
      @Naokarma 3 years ago

      He's just saying you understand 1+1.
      The input he drew on the bottom right is what he's using to compare to the images on the right. If they match up, it's a +1. If not, it's a -1. Red lines = x1, blue lines = x-1.

  • @the.starman
    @the.starman 6 years ago +427

    This is Ben
    "Hello, I'm Ben..."
    "Hello Ben"
    "...And I'm an anonymous neuron"

    • @ziquaftynny9285
      @ziquaftynny9285 6 years ago +1

      an*

    • @someoneincognito6445
      @someoneincognito6445 6 years ago +8

      I want Ben to appear in biology books, he's a very pretty neuron.

    • @robertt9342
      @robertt9342 5 years ago +2

      Isn't the neuron's name Ben? How is he anonymous?

    • @BillAnt
      @BillAnt 5 years ago

      The sound's too low in the first part, it's making me neurotic... lol

    • @warmflatsprite
      @warmflatsprite 4 years ago

      Hello.

  • @camelloy
    @camelloy 5 years ago +389

    me, a biologist, hearing him explain biology... yeah that's about right

  • @Chris_Cross
    @Chris_Cross 4 years ago +557

    Neuroscientists are just brains trying to figure themselves out...

    • @AndyOpaleckych
      @AndyOpaleckych 4 years ago +12

      Holy shet. This is too real for me :D

    • @maximumg99
      @maximumg99 4 years ago +8

      Dats Deap

    • @notphoenixx108
      @notphoenixx108 4 years ago

      Esphaav trouth lol

    • @dootanator_
      @dootanator_ 4 years ago

      Christopher Dibbs if you are being a dumbass, don't worry, you are just a meat bag with electricity going through it; it is going to happen

    • @sirpickle2347
      @sirpickle2347 4 years ago

      Christopher Dibbs AAAAAAAAAAAAAAA

  • @youtubeuniversity3638
    @youtubeuniversity3638 6 years ago +420

    For some reason I want "a bullet of code" to be a code term.

    • @georgerebreev
      @georgerebreev 5 years ago +29

      It is a bullet of code is just semen shooting out of a shaft

    • @angelmurchison1731
      @angelmurchison1731 5 years ago +16

      WHIT3B0OY thanks, I hate it

    • @pranavbadrinathan6693
      @pranavbadrinathan6693 5 years ago +16

      @@georgerebreev Outstanding Move

    • @thefreeze6023
      @thefreeze6023 5 years ago +6

      Maybe for read streams and write streams, what you send into a stream can be called a bullet of code, since you *sorta* shoot it

    • @theterribleanimator1793
      @theterribleanimator1793 4 years ago +8

      @@thefreeze6023 a bullet of code is the "scientific" term for having your code break so spectacularly that you just snap, grab a gun and end it.

  • @SiddheshNan
    @SiddheshNan 6 years ago +140

    brain.exe has stopped working

    • @thelknetwork1883
      @thelknetwork1883 4 years ago +1

      Xaracen it can... under the right circumstances

  • @TestTest-zt1lx
    @TestTest-zt1lx 4 years ago +3

    This is the most helpful video I have seen. The other videos don’t really get into detail of how they work.

  • @colinbalfour1834
    @colinbalfour1834 3 years ago +7

    "So red is positive and blue is negative"
    *My life is a lie*

  • @Thatchxl
    @Thatchxl 6 years ago +1

    I know this video has far fewer views than some of your other videos, but I'm loving it. Please keep up this tutorial style of video and don't be discouraged. I really appreciate it!

  • @oddnap8288
    @oddnap8288 6 years ago +7

    These videos are great! Do you plan to do an ANN implementation/coding example, like before? I personally would find that really valuable. Also, any suggestions on practical Neural Network learning resources?

  • @MKBergins
    @MKBergins 2 years ago +1

    I love your videos, and truly enjoy watching them.
    I appreciate the time & effort you put into making them, and would love to see more videos like this where you teach others your vast knowledge & skills
    I’m barely able to make a video a month, so I totally understand the slog.
    Just thought I’d let you know that I think you’re doing an awesome job. I’ve been a teacher for over a decade, and just want to extend a helping hand if you ever need help in teaching/making educational videos.

  • @trashcan8447
    @trashcan8447 6 years ago +101

    The only thing I heard is "mutatedembabies"

    • @NStripleseven
      @NStripleseven 3 years ago

      And that’s all you need to know...

  • @bbenny9033
    @bbenny9033 6 years ago +96

    wtf your subs doubled since like 3 days ago nice mate

    • @jorian8834
      @jorian8834 6 years ago +3

      benny boy whaha yea I am one of them, watched one video. Then the offline Google thingy one popped up in recommendations. And then I subscribed ^^ interesting stuff.

    • @bbenny9033
      @bbenny9033 6 years ago

      ye its good. nice :3

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago +1

      I think you'll like my channel then. I've got AI demos and will be doing more tutorials soon. Let me know how you liked it, cheers!

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago

      If you like this channel, then you might want to give my channel a check. It's focused around AI. More content and tutorials are coming.

    • @h4724-q6j
      @h4724-q6j 6 years ago +1

      Daporan I'll check it out. I don't normally like advertising on other videos, but you've been nice enough.

  • @Tyros1192
    @Tyros1192 2 years ago +1

    Funnily enough, in class I am learning how neural networking works, and this video has been quite useful in helping me understand it better.

  • @APMathNerd
    @APMathNerd 5 years ago +6

    I love this, and I'd love to see a video on how to actually combine NNs and the genetic algorithm! Keep up the awesome work :D

  • @marius.1337
    @marius.1337 6 years ago +1

    I would like the video connecting neural networks to genetic algorithms, as well as a code video. Great stuff man.

  • @micahgilbertcubing5911
    @micahgilbertcubing5911 6 years ago +4

    Cool! For my CS final project this year I'm doing a basic neural network for simple games (snake, pong, breakout)

    • @Anthony-kq8im
      @Anthony-kq8im 6 years ago +1

      Good luck!

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago +2

      Good luck bro, it's a lot of fun. Check my channel if you need some inspiration for fun games to do AI on.

  • @Oxmond
    @Oxmond 4 years ago +2

    Great stuff! Thanks! 👍🤓

  • @thalesfernandes4263
    @thalesfernandes4263 6 years ago +2

    Hi, I'm trying to implement NEAT in Java too, but I'm having problems with speciation: my species die very fast, and my algorithm could not solve a simple XOR problem. If you made a video explaining some details about NEAT it would be pretty cool, and maybe I could find what I'm doing wrong in my code. I've been able to do several projects using FFNNs, but NEAT seems to be much better at finding solutions, especially when you do not know how many layers or neurons you need to complete the task.
    (I'm Brazilian and I'm going to start a computer science course soon. Your videos are very good, keep bringing more quality content to youtube, and sorry for any spelling mistakes)

  • @bill.i.am1_293
    @bill.i.am1_293 5 years ago +5

    Hey CodeBullet. I'm an upcoming senior in HS and over the past year I've found a passion for coding. I've been trying to get into AI and ML for the past few months but with no luck. Could you go more into depth with this specific neural network?

  • @spencerj
    @spencerj 4 years ago +1

    I would greatly appreciate the followup video you mentioned about the connection of genetic weight evolution with neural networks

  • @illusion9423
    @illusion9423 4 years ago +7

    I'm having an AI test in 6 hours
    thank you Code Bullet

  • @maxhaibara8828
    @maxhaibara8828 6 years ago +27

    Hi Code Bullet, I have a question about activation functions.
    There are a lot of activation functions, but my teacher said that the best one is Sigmoid (or tanh). Why? And is it really the best just because it's a continuous function? If so, can we design our own activation function and have it actually work well? I know that in CNNs they use ReLU instead of Sigmoid. What happens if we use Sigmoid in a CNN, or even our own activation function?
    My teacher never answers questions seriously; they just say that it works better when you actually try it. But that still doesn't answer WHY it is the best. It might be better compared to the non-continuous step function, but is it better than all other activation functions? And why is Sigmoid (or tanh) the only continuous activation function in my book?
    I think this topic would make an interesting tutorial video. Thank you.

    • @KPkiller1671
      @KPkiller1671 6 years ago +8

      The reason we use activation functions is to introduce non-linearity into the NN model; otherwise we could achieve the same thing with a single matrix multiplication. With more layers and non-linear activation functions, the model becomes a non-linear function approximator.
      The reason we like to use continuous functions like Sigmoid, Tanh and ReLU is that they are easily differentiable. There is a supervised learning technique called gradient descent through backpropagation which is used in many tasks instead of Genetic Algorithms. However, gradient descent requires computing the gradient of the weights with respect to the "cost" function (fitness function, if you want to think of it that way) of the network. This is a massive chain rule problem, and since the step function has a gradient of 0 everywhere it is defined (and no gradient at the jump), all of the calculations become 0, making it impossible to use backpropagation.
      I advise you look up backpropagation and how it works. 3Blue1Brown has an awesome video about it:
      ua-cam.com/video/IHZwWFHWa-w/v-deo.html
      Also, Sigmoid is deemed weaker than Tanh and ReLU these days. Lots of Tanh and ReLU models are dominating, with ReLU coming out on top. (Of course, it really depends on the model's context.)
    • @maxhaibara8828
      @maxhaibara8828 6 years ago

      KPkiller1671 but still, if we're just talking about differentiability, there might be other activation functions that are harder to differentiate but work better. Easy doesn't mean best.
      About the polar value that NameName mentioned, I've never heard of it. I think I'll look it up.

    • @maxhaibara8828
      @maxhaibara8828 6 years ago

      NameName ah ic

    • @banhai2
      @banhai2 6 years ago +3

      I'm not sure if sigmoid/tanh is the best way to go all of the time; ReLU, for example, reduces gradient vanishing and enforces sparsity (if below 0, then activation = 0, which translates into fewer activations, which tends to be good for reducing over-fitting).
      Why one is better or worse than another in different cases can hardly ever be found analytically, though.

    • @KPkiller1671
      @KPkiller1671 6 years ago

      Max Haibara you will find in machine learning that if something is faster to differentiate, your model can be trained faster. Also, ReLU has a neat advantage over all other activation functions:
      you can construct residual blocks for deeper networks. If the network has too many layers, an identity layer is needed somewhere. ReLU allows the network to do this easily by just setting the weight matrix of that layer to 0s.

  • @austinbritton1029
    @austinbritton1029 3 years ago +1

    Came here for the knowledge, subbed for the humor

  • @MisterL2_yt
    @MisterL2_yt 5 years ago +9

    10:26 wait... no way that you freehand drew that

  • @cryptophoenix6541
    @cryptophoenix6541 6 years ago

    Congrats on 100K this channel is really growing fast!

  • @elijahtommy7772
    @elijahtommy7772 5 years ago +1

    So, if you have a neuron that you want to require 3 positive connections for it to activate, then you would make the bias -2, right?
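The arithmetic behind that question can be checked in a couple of lines. This sketch assumes a step-style neuron that fires only on a strictly positive sum (the exact threshold convention is an assumption, not something stated in the video):

```python
def neuron(inputs, bias=-2):
    # Step-style neuron: with bias -2, at least three +1 inputs are needed
    # before the total (sum + bias) comes out strictly positive.
    return 1 if sum(inputs) + bias > 0 else 0
```

Three +1 inputs give 3 - 2 = 1 (fires); two give 2 - 2 = 0 (does not fire under a strict threshold).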

  • @venusdandan4347
    @venusdandan4347 5 years ago

    I looked away for like 2 seconds and suddenly I didn't understand and had to rewind. I love the drawings

  • @KPkiller1671
    @KPkiller1671 6 years ago +4

    I really think you should amend your title. I believe, as it stands, a lot of newcomers to neural nets are going to think that Genetic Algorithms are the be-all and end-all of training a neural network. I got caught up in this mess myself before discovering the world of gradient descent (and other optimization techniques) and backpropagation. Of course, supervised learning techniques contain a lot more maths and are a fair amount more complex, but I don't think people should be told that this is definitively how all neural networks work.

  • @_aullik
    @_aullik 6 years ago +37

    Can you please upload your Dino code to your GitHub?

  • @ExtraTurtle
    @ExtraTurtle 3 years ago +3

    Hey, just a random question.
    Why did you need to check 4 times on the second level?
    Couldn't the ones that check for black just return a negative to the third layer instead of having the two yellow ones?
    The 2nd row and 4th row in the 2nd layer can just connect to both of the neurons in the 3rd layer, but with a blue one to the ones they're not connected to currently.
    Will that not work?
    Edit: I realize that it won't work with the current numbers, because if you have 3 blacks and 1 white, for example, it will have +2 and -1 and still be positive. But what if we make the negative one much bigger? For example, positive adds 1 and negative just makes it negative, so if there's at least one negative, send 0 no matter what; if all positive, send 1.
    Also, why does the second level send 0? If it sent -1 instead, wouldn't the bias just be redundant?
    Is this a valid neural network? prnt.sc/10b6pbo
    Did I make a mistake here?
    Is it not allowed to have blue ones on the second layer?
    Is it not allowed to have numbers other than -1?

  • @professordragon
    @professordragon 2 years ago +1

    You should definitely do a coded example of this, even if it's 4 years late...

  • @zan1971
    @zan1971 3 years ago

    Not studying computer science or anything, but this was very interesting! You say stuff like weights and network all the time in your videos, so this helps explain how it works.
    Gist: layer 1 is the input of what you want and is assigned a value. Layer 2 is calculated from layer 1 + neural connections + B. Layer 3 is calculated from layer 2 + neural connections + B. These calculations always lead to the correct output because each checks whether the value is more than/equal to or less than/equal to 1. So what are the numbers that will always give you the correct output? That is what the AI is going to decide on after lots of trial and error, I'm assuming, and it probably always starts with random values. The neural connections are weights, which the AI also decides. So AI evolving is just guessing the right numbers. Pretty simple.
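That layer-by-layer gist is just repeated weighted sums. A minimal sketch (the weights, biases, and the >= 1 threshold here are illustrative choices, not the exact numbers from the video):

```python
def dense_layer(values, weights, biases):
    # One output per neuron: weighted sum of every input plus that
    # neuron's bias, pushed through a simple >= 1 threshold activation.
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(v * w for v, w in zip(values, neuron_weights)) + bias
        outputs.append(1 if total >= 1 else 0)
    return outputs

# Layer 1 is just the input; layers 2 and 3 each apply weights and biases.
layer1 = [1, 0]
layer2 = dense_layer(layer1, weights=[[1, 1], [-1, 1]], biases=[0, 0])  # [1, 0]
layer3 = dense_layer(layer2, weights=[[1, -1]], biases=[0])             # [1]
```

Evolving the network means searching for weight and bias values that make the final layer come out right for every input.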

  • @SaplinGuy
    @SaplinGuy 5 years ago +4

    6:30 and onwards reminded me so much of tecmath's channel... Like holy shit xD

  • @davisdiercks
    @davisdiercks 6 years ago

    Nice explanation! In future videos it might be a good idea to invest more time in volume balancing though 😂 that one talking section in the middle and the outro music just absolutely blasted me lol

  • @chthonicone7389
    @chthonicone7389 6 years ago +2

    Code Bullet, Ben's mother cares about his soma!
    Seriously though, I was thinking, and I think there is a reason why all of your asteroids playing AIs devolve into spinning and moving. The problem that causes this is your input mechanism. It is too simple.
    Stop me if I'm wrong, but you explained your input mechanism as having the following inputs: Whether or not there is an asteroid (possibly distance) in each of 8 directions around the ship, ship facing, and ship speed. I do not remember if you give it ship position at all, but that is irrelevant in my opinion.
    The problem with this setup comes from the fact that to the AI, asteroids seem to vanish from moment to moment as they pass between the directions if they can fit between 2 rays comfortably. As such, the AI really isn't tracking asteroids from moment to moment.
    My solution is, what if you fed the AI distance, direction, and heading of each of the closest 8 asteroids in order from farthest to closest. This will allow the AI to have some object persistence, as well as actual tracking for the objects. Likely the AIs will be able to develop more complex strategies as a result.
    The overhead of such an approach is that you have about 3x the number of inputs, and while it's a linear increase in the number of inputs, it may result in an exponential increase in the number of neurons. However, a good GPU will likely be able to handle this.
    I would be interested in how this affects the AIs you bred, and whether or not they develop more intelligent techniques with the information they would be given.

  • @joridvisser6725
    @joridvisser6725 2 years ago

    Very interesting and I'm still waiting for part 2...

  • @גרמניתבעברית
    @גרמניתבעברית 6 years ago

    I really enjoy learning from your videos

  • @christianlira1259
    @christianlira1259 5 years ago

    Great NN video and thank you CB!

  • @shauryapatel8372
    @shauryapatel8372 4 years ago

    Thank you Code Bullet. I am a 10-year-old PCAP and I am trying to learn AI, or more specifically DL. Everywhere I search I don't understand anything, but I understood when you told me. Again, thank you

  • @filyb
    @filyb 6 years ago

    yeees I'm so looking forward to your next video! Pls keep it up!

  • @tctrainconstruct2592
    @tctrainconstruct2592 5 years ago

    A neuron doesn't just sum up the inputs and then apply the activation function; it also adds a "bias" to the sum.
    Output = H(b + Si)
    where H is the activation function, b the bias and Si the sum of the inputs.
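That one formula, written out (H is chosen here as a simple step function purely for illustration):

```python
def activate(inputs, bias, H=lambda z: 1 if z > 0 else 0):
    # Output = H(b + Si): the bias is added to the sum of the inputs
    # *before* the activation function H is applied.
    return H(bias + sum(inputs))
```

Any activation function can be passed in as H; the point is only where the bias enters the calculation.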

  • @Amir-tv4nn
    @Amir-tv4nn 5 years ago

    Fantastic man. Your videos are great...

  • @dylanshaffer2174
    @dylanshaffer2174 4 years ago

    "...And I will probably make a video about combining neural networks and the genetic algorithm sometime in the future"
    ...
    Wish that happened, I miss these educational tutorial videos. The new ones are fun though, love your work!

  • @jercamdu78
    @jercamdu78 3 years ago

    Hey, it would be great to have that final tutorial example combining neural networks and genetic algorithms ^^

  • @nigaraliyeva7607
    @nigaraliyeva7607 3 years ago

    Wow, a really great and simple video!

  • @indydiependaele2345
    @indydiependaele2345 4 years ago

    I am seeing neural networks right now in Python classes in college; this was very helpful

  • @TomasTomi30
    @TomasTomi30 5 years ago +1

    3Blue1Brown also made a great video about neural networks, definitely worth seeing

  • @taranciucgabrielradu
    @taranciucgabrielradu 2 years ago

    Funny how I knew literally every single thing in more detail because, well... I have a computer science Masters degree. But I still stayed for the entertainment you provide, because yes baby

  • @BlueNEXUSGaming
    @BlueNEXUSGaming 4 years ago

    @Code Bullet
    You could use an Info Card to take people to the Video/Channel you mentioned at 5:00

  • @Skjoldmc
    @Skjoldmc 6 years ago

    Wow, you explained it so I could understand it. Great job!

  • @micahgilbertcubing5911
    @micahgilbertcubing5911 6 years ago +1

    Damn this channel exploded recently!

  • @楊學翰-m8m
    @楊學翰-m8m 6 years ago +3

    "Ah man this is confusing"😂

  • @lostbutfreesoul
    @lostbutfreesoul 6 years ago +1

    I still can't drop this idea of training two networks in a predator-prey format and letting them go at it for a while....

    • @liam6550
      @liam6550 5 years ago +1

      and see what tactics each AI comes up with

  • @ADogToy
    @ADogToy 4 years ago

    I've gotten so many super long ads for programming courses. CodeB's getting serious audience-targeted ads, I hope they're paying out well. Also def not skipping cuz they hit the nail on the head with this one xd

  • @NewbGamingNetworks
    @NewbGamingNetworks 6 years ago

    Thanks for the video, bullet!

  • @MinecraftingMahem
    @MinecraftingMahem 6 years ago

    Please do the video combining genetic algorithm and neural networks. This is great!

  • @thomaserkes2676
    @thomaserkes2676 6 years ago

    I’m watching this and revising science at the same time, cheers mate

  • @KernelLeak
    @KernelLeak 6 years ago +6

    6:30 / 11:55 RIP headphones... :(
    Maybe run your audio clips through a filter like ReplayGain that tries to make sure the audio has about the same overall volume before editing all that stuff together?

  • @wolfbeats9993
    @wolfbeats9993 2 years ago +1

    Can someone explain where at 6:46 he got the 3rd -1 even though there are only two boxes? I understand how he got -1; it's just confusing me why there are three

  • @njupasupa1948
    @njupasupa1948 5 years ago

    Thank you Ben, I got a four in biology class yesterday.

  • @ther701
    @ther701 5 years ago +1

    0:50 reminds me I have yet to learn the nervous system for exams

  • @TroubleMakery
    @TroubleMakery 6 years ago

    Where's the next part of this series, dude? I. Need. It. I need it!

  • @nikolachristov6497
    @nikolachristov6497 5 years ago +1

    Sorry if I sound really dumb for saying this, but wasn't the bias supposed to always give out a 1? If so, why does every equation end with -1 when the bias is factored in?

  • @bencematrai7355
    @bencematrai7355 6 years ago

    Thanks! You are really inspiring :D

  • @tnnrhpwd
    @tnnrhpwd 2 years ago

    For anyone confused with the math at 8:00, he did not include his first step of creating the numbers in the far left column. The far left numbers change based on what input is used. I suppose this could be assumed, but it took me a second to realize it.

  • @aa01blue38
    @aa01blue38 6 years ago

    With the checkerboard pattern, you can do the exact same thing with 1 XOR gate, 2 XNOR gates and 2 AND gates

  • @SammyDoesAThingYT
    @SammyDoesAThingYT 6 years ago +2

    Question: When mutating the candidates in a population, how many mutations do you give to each candidate?

    • @KPkiller1671
      @KPkiller1671 6 years ago +1

      You usually don't dish out a fixed number of mutations. Quite often you will have a mutation rate of about 1%, but it is not set in stone. Another popular approach is to mutate at a rate of 1/successRate. Hope this helps :)
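A per-weight mutation rate looks roughly like this in code (a generic sketch; the default rate and the Gaussian nudge size are arbitrary illustrative choices, not values from the video):

```python
import random

def mutate(weights, rate=0.01):
    # Each weight independently has a `rate` chance of receiving a small
    # random nudge; most weights pass through unchanged.
    return [w + random.gauss(0, 0.5) if random.random() < rate else w
            for w in weights]
```

So "1% mutation rate" means each weight has a 1-in-100 chance of changing per generation, rather than exactly one mutation per candidate.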

  • @ilayws4448
    @ilayws4448 5 years ago

    Amazing as always!

  • @ev3rything533
    @ev3rything533 3 years ago

    So how exactly are you using evolution combined with the neural networks? Btw, this was a great video explaining neural networks. I understood the basic concept, but didn't understand how the weights correlated to actual math.

  • @deepslates
    @deepslates 4 years ago +6

    I didn't understand the "oversimplified" explanation.
    Imagine neuroscientists

  • @nesurame
    @nesurame 5 years ago

    This video taught me more about neurons than I learned in school

  • @Appl3Muncher
    @Appl3Muncher 5 years ago

    Very informative, thanks for the video

  • @NStripleseven
    @NStripleseven 3 years ago

    Came for no reason, stayed because big funny and smart

  • @pace6249
    @pace6249 6 years ago

    love ur vids man more plz

  • @NHSynthonicOrchestra
    @NHSynthonicOrchestra 6 years ago

    Here's an idea: what about putting an AI against a rhythm game like Guitar Hero/Clone Hero, or any rhythm game like NecroDancer? Could you possibly make an AI to complete a game?

  • @eritra4303
    @eritra4303 3 years ago +1

    Did anybody understand the bias? I wonder why it is -1 in this example, when he told us the bias neuron is 1. Even though the weight before it is 1, I see no reason why the bias is -1 other than that it just works.

  • @BExploit
    @BExploit 6 years ago

    A coded example would be nice. I like your videos

  • @gauravrewaliya3269
    @gauravrewaliya3269 4 years ago +1

    You've made many videos on it,
    but still haven't shown how we can practically experiment with this

  • @dominiksmeda7203
    @dominiksmeda7203 6 years ago

    Please teach me this amazing AI. I'm waiting for more. Great job!

  • @nCUX1699
    @nCUX1699 6 years ago

    Even though I didn't find anything useful for me, it was a great video! Just try to get your audio a little more even throughout the video next time

  • @meatkirbo
    @meatkirbo 2 years ago

    *sees thumbnail*
    "Oh let's see if I'm dumb"
    *watches video*

  • @battusurender4282
    @battusurender4282 6 years ago +6

    Hey I take inspiration from your videos and would like to do what you do. It looks fun, training games. Could you guide me in the right direction? Where I should start learning and what I should do?

    • @tobylorkin859
      @tobylorkin859 6 years ago

      Battu Surender if you are in Australia search for SAE courses

    • @rumfordc
      @rumfordc 6 years ago +3

      step 1: learn how to program
      step 2: learn how to program games
      step 3: learn how to program game AI
      step 4: learn how to program Neural Network AI

  • @bl4ckscor3
    @bl4ckscor3 6 years ago

    That perfect checkmark

  • @SyedAli-mc7on
    @SyedAli-mc7on 5 years ago

    3:35 Best part of the video

  • @gustavomartinez6892
    @gustavomartinez6892 6 years ago

    Great job!!!

  • @supremespark2454
    @supremespark2454 6 years ago

    This is pretty simple.
    For those who are a bit lost, think of it as a filter or a set of yes/no gates you have to pass through

  • @ohiasdxfcghbljokasdjhnfvaw4ehr
    @ohiasdxfcghbljokasdjhnfvaw4ehr 6 years ago

    I'd like to see more of this applied

  • @elcidbob
    @elcidbob 4 years ago

    Seems like it would be more effective, albeit more complicated, to implement how real neurons work: there aren't two states, there are three. Neurons always fire. They have what's known as a resting rate, which is just the rate they fire at with no external influence. When inputs stimulate the neuron, it fires at a faster rate. When an input inhibits a neuron, it fires at a slower rate. When there's no input, or the inhibitory input equals the stimulating input, there's no change.

  • @William_ar98
    @William_ar98 6 years ago

    Can you please do a video on how to actually code/create those neural networks? That's the part I'm struggling with.

  • @amitkeren7771
    @amitkeren7771 6 years ago

    Amazing vid!!! plz more

  • @bawlsinyojaws8938
    @bawlsinyojaws8938 2 years ago

    YouTube keeps recommending this video 18 times even though I've watched it

  • @QuasiGame0
    @QuasiGame0 4 years ago

    So, in a way, aren't AI neural networks more or less a large string of AND functions with counters that increase/decrease depending on whether the result was favourable/correct, and then you compare the counters to determine which AND function should be chosen?

    • @Salvatorr42
      @Salvatorr42 4 years ago +1

      For each neuron, rather than a question of "does the input have this feature", it's more of a question of "how well is this feature found in the input". So you might get values of 1.2 if it's detected, 0 if not, -2.6 if the opposite was found, etc. So it's not really a bunch of AND functions, but rather a linear combination of the inputs. eg if you have inputs x1, x2, x3, weights w1, w2, w3, and bias b, then you compute z = x1w1 + x2w2 + x3w3 + b. We then compute y = f(z) for some activation function f, like sigmoid. This activation function lets our network approximate non-linear functions.
      Also, the usual way that we train networks isn't through increasing/decreasing counters. Instead, we define a loss function which describes how badly the network performed on the given data. Then, we use calculus to figure out how our loss function changes with respect to the weights. This will tell us how to change our weights to decrease the loss, so we update the weights a tiny bit given that info. We keep repeating that until we're satisfied with the network.

    • @QuasiGame0
      @QuasiGame0 4 years ago

      @@Salvatorr42 Thanks for the response! I'm trying to recreate it using LittleBigPlanet's logic.
      Wasn't expecting a reply on a year-old video haha

  • @biteshtiwari
    @biteshtiwari 6 years ago

    Keep posting on AI. I want to learn AI coding and you have natural teaching abilities.

  • @Т1000-м1и
    @Т1000-м1и 3 years ago +1

    420 comments...... there's a saying "I understood the meaning of life", which means I understood a lot. So uhhh, I now understand a lot more about neural networks. The multiple-into-one idea is genius and yeah, cool. I watched this some time back, but then I didn't understand some basics of the neuroevolution stuff and some other things, and yeah...... cool
    Edit: also I forgot to finish the 42.0 420 meaning-of-life thing. It was meant to be at the end of the comment as the conclusion, and yeah, anyways.

  • @yashaswikulshreshtha1588
    @yashaswikulshreshtha1588 3 years ago +1

    The only difference between a biological neuron and an artificial neuron is that one uses chemistry to process data and the other uses pure maths