What are Neural Networks || How AIs think

  • Published 16 Jun 2024
  • Big thanks to Brilliant.org for supporting this channel. Check them out at www.brilliant.org/CodeBullet
    Check out Brandon Rohrer's video here: • How Deep Neural Networ...
    Become a patron to support my future content, as well as sneak peeks of what's to come.
    / codebullet
    Check out my Discord server
    / discord

COMMENTS • 475

  • @jord5626
    @jord5626 6 years ago +868

    I came to learn, realised I'm not smart enough and stayed for the drawings.

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago +12

      If you like AI applied to games you might want to give my channel a check. Cheers!

    • @musicalbrit3465
      @musicalbrit3465 6 years ago +50

      Daporan, self-advertising on someone else’s channel isn’t cool, mate

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago +20

      I had no bad intentions, but I understand your view.

    • @DehimVerveen
      @DehimVerveen 5 years ago +4

      If you want to learn more about Machine Learning / AI, you should give this playlist by Andrew Ng a try: ua-cam.com/play/PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN.html It's really great. I've found these exercises go well with the material: github.com/everpeace/ml-class-assignments/tree/master/downloads

    • @310garage6
      @310garage6 5 years ago +2

      I'm not smart enough, so I turned the sound off and looked at the pictures 😉

  • @camelloy
    @camelloy 5 years ago +385

    Me, a biologist, hearing him explain biology... yeah, that's about right

  • @Dreamer66617
    @Dreamer66617 5 years ago +53

    By 2:29 I fully understood the concept behind neural networks... I'm a third-year comp sci student and I've never heard anybody explain this so perfectly. Thank you!! Very impressive!!!

  • @sovereigncataclysm
    @sovereigncataclysm 4 years ago +41

    6:25 smooth transition there

    • @toast_bath5937
      @toast_bath5937 3 years ago +4

      So smooth I had to click your time stamp to realize there was a transition

  • @44kainne
    @44kainne 5 years ago +84

    Honestly, I would watch any programming course taught by you in this style.

  • @dittygoops
    @dittygoops 4 years ago +65

    CB: I will just run through this, you get it
    Me: no I don’t

    • @monkeyrobotsinc.9875
      @monkeyrobotsinc.9875 3 years ago

      Yeah he sux ASS

    • @Naokarma
      @Naokarma 3 years ago

      @@monkeyrobotsinc.9875 That was not the point of the comment.

    • @Naokarma
      @Naokarma 3 years ago

      He's just saying you understand 1+1.
      The input he drew on the bottom right is what he's using to compare to the images on the right. If they match up, it's a +1; if not, it's a -1. Red lines = ×1, blue lines = ×(-1).
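
      A minimal sketch of that matching idea, assuming Python; the +1/-1 pixel encoding, the weights, and the function name are illustrative, not taken from the video:

          # Pixels are encoded as +1 (white) or -1 (black).
          # A red connection carries weight +1, a blue connection weight -1.
          def match_score(pixels, weights):
              # Agreeing pixels contribute +1, disagreeing pixels contribute -1.
              return sum(p * w for p, w in zip(pixels, weights))

          template = [1, -1, -1, 1]                      # pattern the neuron looks for
          print(match_score([1, -1, -1, 1], template))   # 4  -> perfect match
          print(match_score([-1, 1, 1, -1], template))   # -4 -> complete mismatch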

  • @the.starman
    @the.starman 6 years ago +426

    This is Ben
    "Hello, I'm Ben..."
    "Hello Ben"
    "...And I'm an anonymous neuron"

    • @ziquaftynny9285
      @ziquaftynny9285 6 years ago +1

      an*

    • @someoneincognito6445
      @someoneincognito6445 5 years ago +8

      I want Ben to appear in biology books, he's a very pretty neuron.

    • @robertt9342
      @robertt9342 5 years ago +2

      Isn't the neuron's name Ben? How is he anonymous?

    • @BillAnt
      @BillAnt 5 years ago

      The sound's too low in the first part, it's making me neurotic... lol

    • @warmflatsprite
      @warmflatsprite 4 years ago

      Hello.

  • @Chris_Cross
    @Chris_Cross 4 years ago +555

    Neuroscientists are just brains trying to figure themselves out...

    • @AndyOpaleckych
      @AndyOpaleckych 4 years ago +12

      Holy shet. This is too real for me :D

    • @maximumg99
      @maximumg99 4 years ago +8

      Dats Deap

    • @notphoenixx108
      @notphoenixx108 3 years ago

      Esphaav trouth lol

    • @dootanator_
      @dootanator_ 3 years ago

      Christopher Dibbs, if you are being a dumbass don't worry, you are just a meat bag with electricity going through it, it is going to happen

    • @sirpickle2347
      @sirpickle2347 3 years ago

      Christopher Dibbs AAAAAAAAAAAAAAA

  • @nathangg9018
    @nathangg9018 4 years ago +246

    So if the brain is made up of 100 billion neurons, does that mean that if we had computers powerful enough to simulate evolution with creatures of 100 billion neurons, they could eventually become as intelligent as us?

    • @ardnerusa
      @ardnerusa 4 years ago +90

      A real brain has much more variety in neuron connections than just 0 and 1. There are inhibitors, loops, and more and more... So we're not even close to creating computers which could support it. It's easier to brute-force a password-protected RAR than to do that.

    • @otaku-chan4888
      @otaku-chan4888 4 years ago +14

      And that is why people think that Artificial Intelligence might someday develop far enough to rival human beings' smarts.

    • @otaku-chan4888
      @otaku-chan4888 4 years ago +33

      ​@@ardnerusa You're right, but on the other hand technology is always advancing, just as if you told people a hundred years ago that it's possible to send a human to that one ball in the sky that goes from crescent to gibbous and back again, or if you told them that a computer could be smaller than your hand and be used to hear someone's voice instead of being a gigantic machine that's as big as two rooms (and can only do basic arithmetic lol) they wouldn't believe you.
      Quantum computers can already create states that are _neither 0 nor 1_ which opens the possibility for neuron contacts and probabilities. It's already possible to mimic the action of inhibitors and circles, and perhaps in less than a century making a "are you human?" captcha will not even exist because there'll be no problem AI cannot solve.
      However if by intelligence you mean emotional intelligence...no one can say. Honestly it boggles my mind to think that someone someday can type code which enables a computer to feel emotions like genuine frustration or excitement which isn't pre-programmed. But it may just happen, who knows.

    • @scptime1188
      @scptime1188 3 years ago +14

      This is exactly why robots are so limited in their use. Not robots in general, but robots made to do a specific thing are rubbish at anything else. Our brains, however, hold multiple connections that let us do many, many, many tasks. Combine that with personality, self awareness, intuition and creativity, and you have a neural network beyond anything we can make. At least, for now.

    • @shot-gi6mr
      @shot-gi6mr 2 years ago +4

      @@otaku-chan4888 Since evolution created human emotions, it seems likely that one day, when we have neural networks as complicated as our brains, they could learn emotions through evolution algorithms. But I agree that it's probably impossible that a person would be able to manually type code that equates to emotional intelligence.

  • @youtubeuniversity3638
    @youtubeuniversity3638 6 years ago +420

    For some reason I want "a bullet of code" to be a code term.

    • @georgerebreev
      @georgerebreev 5 years ago +29

      It is a bullet of code is just semen shooting out of a shaft

    • @angelmurchison1731
      @angelmurchison1731 5 years ago +16

      WHIT3B0OY thanks, I hate it

    • @pranavbadrinathan6693
      @pranavbadrinathan6693 5 years ago +16

      @@georgerebreev Outstanding Move

    • @thefreeze6023
      @thefreeze6023 5 years ago +6

      Maybe for read streams and write streams, what you send into a stream can be called a bullet of code, since you *sorta* shoot it

    • @theterribleanimator1793
      @theterribleanimator1793 4 years ago +8

      @@thefreeze6023 a bullet of code is the "scientific" term for having your code break so spectacularly that you just snap, grab a gun and end it.

  • @colinbalfour1834
    @colinbalfour1834 2 years ago +7

    "So red is positive and blue is negative"
    *My life is a lie*

  • @SiddheshNan
    @SiddheshNan 6 years ago +138

    brain.exe has stopped working

    • @thelknetwork1883
      @thelknetwork1883 3 years ago +1

      Xaracen it can... under the right circumstances

  • @TestTest-zt1lx
    @TestTest-zt1lx 4 years ago +3

    This is the most helpful video I have seen. The other videos don’t really get into detail of how they work.

  • @Thatchxl
    @Thatchxl 5 years ago +1

    I know this video has far fewer views than some of your other videos, but I'm loving it. Please keep up this tutorial-style video and don't be discouraged. I really appreciate it!

  • @cryptophoenix6541
    @cryptophoenix6541 6 years ago

    Congrats on 100K, this channel is really growing fast!

  • @MKBergins
    @MKBergins 2 years ago +1

    I love your videos, and truly enjoy watching them.
    I appreciate the time & effort you put into making them, and would love to see more videos like this where you teach others your vast knowledge & skills
    I’m barely able to make a video a month, so I totally understand the slog.
    Just thought I’d let you know that I think you’re doing an awesome job. I’ve been a teacher for over a decade, and just want to extend a helping hand if you ever need help in teaching/making educational videos.

  • @marius.1337
    @marius.1337 6 years ago +1

    I would like the video connecting neural networks to genetic algorithms, as well as a code video. Great stuff man.

  • @trashcan8447
    @trashcan8447 6 years ago +101

    The only thing I heard is "mutatedembabies"

    • @NStripleseven
      @NStripleseven 3 years ago

      And that’s all you need to know...

  • @filyb
    @filyb 6 years ago

    Yeees, I'm so looking forward to your next video! Pls keep it up!

  • @oddnap8288
    @oddnap8288 6 years ago +7

    These videos are great! Do you plan to do an ANN implementation/coding example, like before? I personally would find that really valuable. Also, any suggestions on practical neural network learning resources?

  • @davisdiercks
    @davisdiercks 6 years ago

    Nice explanation! In future videos it might be a good idea to invest more time in volume balancing though 😂 that one talking section in the middle and the outro music I just got absolutely blasted lol

  • @Skjoldmc
    @Skjoldmc 6 years ago

    Wow, you explained it so I could understand it. Great job!

  • @christianlira1259
    @christianlira1259 5 years ago

    Great NN video and thank you CB!

  • @user-uq3ew3ce7o
    @user-uq3ew3ce7o 6 years ago

    I really enjoy learning from your videos

  • @venusdandan4347
    @venusdandan4347 4 years ago

    I looked away for like 2 seconds and suddenly I didn't understand and had to rewind. I love the drawings

  • @Amir-tv4nn
    @Amir-tv4nn 4 years ago

    Fantastic man. Your videos are great...

  • @spencerj
    @spencerj 4 years ago +1

    I would greatly appreciate the followup video you mentioned about the connection of genetic weight evolution with neural networks

  • @Tyros1192
    @Tyros1192 2 years ago +1

    Funnily enough, in class I am learning how neural networks work, and this video has been quite useful in helping me understand it better.

  • @ilayws4448
    @ilayws4448 5 years ago

    Amazing as always!

  • @NewbGamingNetworks
    @NewbGamingNetworks 6 years ago

    Thanks for the video, bullet!

  • @Oxmond
    @Oxmond 4 years ago +2

    Great stuff! Thanks! 👍🤓

  • @bencematrai7355
    @bencematrai7355 6 years ago

    Thanks! You are really inspiring :D

  • @nigaraliyeva7607
    @nigaraliyeva7607 3 years ago

    Wow, a really great and simple video!

  • @illusion9423
    @illusion9423 4 years ago +7

    I'm having an AI test in 6 hours
    thank you Code Bullet

  • @TroubleMakery
    @TroubleMakery 6 years ago

    Where’s the next part of this series dude? I. Need. It. I need it!

  • @pace6249
    @pace6249 6 years ago

    love ur vids man more plz

  • @MinecraftingMahem
    @MinecraftingMahem 5 years ago

    Please do the video combining genetic algorithm and neural networks. This is great!

  • @APMathNerd
    @APMathNerd 5 years ago +6

    I love this, and I'd love to see a video on how to actually combine NNs and the genetic algorithm! Keep up the awesome work :D

  • @joridvisser6725
    @joridvisser6725 2 years ago

    Very interesting and I'm still waiting for part 2...

  • @dominiksmeda7203
    @dominiksmeda7203 6 years ago

    please teach me this amazing AI. I'm waiting for more. Great Job!

  • @micahgilbertcubing5911
    @micahgilbertcubing5911 6 years ago +4

    Cool! For my CS final project this year I'm doing a basic neural network for simple games (snake, pong, breakout)

    • @Anthony-kq8im
      @Anthony-kq8im 6 years ago +1

      Good luck!

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago +2

      Good luck bro, it's a lot of fun. Check my channel if you need some inspiration for fun games to do AI on.

  • @bill.i.am1_293
    @bill.i.am1_293 5 years ago +5

    Hey CodeBullet. I’m an upcoming senior in HS and over the past year I’ve found a passion for coding. I’ve been trying to get into ai and ML for the past few months but with no luck. Could you go more into depth with this specific neural network?

  • @Appl3Muncher
    @Appl3Muncher 5 years ago

    Very informative, thanks for the video

  • @indydiependaele2345
    @indydiependaele2345 4 years ago

    I am seeing neural networks right now in Python classes in college, this was very helpful

  • @jercamdu78
    @jercamdu78 3 years ago

    Hey, it would be great to have that final tutorial example combining neural networks and genetic algorithms ^^

  • @HonkTheMusic
    @HonkTheMusic 4 years ago

    This was surprisingly easy to follow

  • @micahgilbertcubing5911
    @micahgilbertcubing5911 6 years ago +1

    Damn this channel exploded recently!

  • @njupasupa1948
    @njupasupa1948 5 years ago

    Thank you Ben, I got a four in biology class yesterday.

  • @gustavomartinez6892
    @gustavomartinez6892 5 years ago

    Great job!!!

  • @thalesfernandes4263
    @thalesfernandes4263 6 years ago +2

    Hi, I'm trying to implement NEAT in Java too, but I'm having problems with speciation: my species die very fast, and my algorithm could not solve a simple XOR problem. If you made a video explaining some details about NEAT it would be pretty cool, and maybe I could find what I'm doing wrong in my code. I've been able to do several projects using FFNNs, but NEAT seems to be much better at finding solutions, especially when you do not know how many layers or neurons you need to complete the task.
    (I'm Brazilian and I'm going to start a computer science course soon. Your videos are very good, keep bringing more quality content to youtube, and sorry for any spelling mistakes)

  • @shauryapatel8372
    @shauryapatel8372 4 years ago

    Thank you Code Bullet, I am a 10-year-old PCAP and I am trying to learn AI, or more specifically DL. Everywhere I search I don't understand anything, but I understood when you explained it. And again, thank you

  • @Lank891
    @Lank891 6 years ago

    Waiting for more vids

  • @thomaserkes2676
    @thomaserkes2676 6 years ago

    I’m watching this and revising science at the same time, cheers mate

  • @BExploit
    @BExploit 6 years ago

    A coded example would be nice. I like your videos

  • @BlueNEXUSGaming
    @BlueNEXUSGaming 4 years ago

    @Code Bullet
    You could use an Info Card to take people to the Video/Channel you mentioned at 5:00

  • @amitkeren7771
    @amitkeren7771 6 years ago

    Amazing vid!!! plz more

  • @SaplinGuy
    @SaplinGuy 4 years ago +4

    6:30 and onwards reminded me so much of tecmath's channel... Like holy shit xD

  • @dylanshaffer2174
    @dylanshaffer2174 4 years ago

    "...And I will probably make a video about combining neural networks and the genetic algorithm sometime in the future"
    ...
    Wish that happened, I miss these educational tutorial videos. The new ones are fun though, love your work!

  • @MisterL2_yt
    @MisterL2_yt 5 years ago +9

    10:26 wait... no way that you freehand drew that

  • @zan1971
    @zan1971 3 years ago

    Not studying computer science or anything, but this was very interesting! You say stuff like weights and network all the time in your videos, so this helps explain how.
    Gist is: layer 1 is the input of what you want and is assigned a value. Layer 2 is calculated from layer 1 times the connection weights, plus a bias B. Layer 3 is calculated from layer 2 times the connection weights, plus a bias B. These calculations lead to the correct output because each neuron checks whether its value is at or above the threshold. So what are the numbers that will always give you the correct output? That is what the AI decides on after lots of trial and error, I'm assuming, and it probably always starts with random values. The neural connections are the weights, which the AI also decides. So AI evolving is just guessing the right numbers. Pretty simple.
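
    A minimal sketch of that gist, assuming Python; the layer sizes, weight values, and the threshold of 1 are illustrative, not the exact numbers from the video:

        def step(x, threshold=1):
            # Fire (1) only if the weighted sum reaches the threshold.
            return 1 if x >= threshold else 0

        def layer(inputs, weights, bias):
            # Each output neuron: step(sum(input_i * weight_i) + bias)
            return [step(sum(i * w for i, w in zip(inputs, row)) + bias)
                    for row in weights]

        inputs = [1, 0, 1]                                      # layer 1: the raw input
        hidden = layer(inputs, [[1, 1, 0], [0, 1, 1]], bias=0)  # layer 2
        output = layer(hidden, [[1, 1]], bias=0)                # layer 3
        print(hidden, output)                                   # [1, 1] [1]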

  • @tctrainconstruct2592
    @tctrainconstruct2592 4 years ago

    A neuron doesn't just sum up the inputs and then apply the activation function; it also adds a "bias" to the sum:
    Output = H(b + Si)
    where H is the activation function, b is the bias and Si is the sum of the inputs.
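
    The same formula as a tiny Python sketch; H (a step function here), the input values, and the bias are illustrative:

        def H(x):
            return 1 if x >= 0 else 0              # step activation

        def neuron(weighted_inputs, b):
            Si = sum(weighted_inputs)              # Si: sum of the incoming signals
            return H(b + Si)                       # Output = H(b + Si)

        print(neuron([0.5, 0.0, 0.5], b=-0.6))     # prints 1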

  • @ADogToy
    @ADogToy 3 years ago

    I've gotten so many super long ads for programming courses. CodeB's getting serious audience targeted ads, I hope they're paying out well. Also def not skipping cuz they hit the nail on the head on this one xd

  • @aa01blue38
    @aa01blue38 5 years ago

    With the checkerboard pattern, you can do the exact same thing with 1 XOR gate, 2 XNOR gates and 2 AND gates

  • @bbenny9033
    @bbenny9033 6 years ago +96

    wtf your subs doubled since like 3 days ago nice mate

    • @jorian8834
      @jorian8834 6 years ago +3

      benny boy whaha yea I am one of them, watched one video. Then the offline Google thingy one popped up in recommendations. And then I subscribed ^^ interesting stuff.

    • @bbenny9033
      @bbenny9033 6 years ago

      ye its good. nice :3

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago +1

      I think you'll like my channel then. I've got AI demos and will be doing more tutorials soon. Let me know how you liked it, cheers!

    • @PandoraMakesGames
      @PandoraMakesGames 6 years ago

      If you like this channel, then you might want to give my channel a check. It's focused around AI. More content and tutorials are coming.

    • @henryambrose8607
      @henryambrose8607 6 years ago +1

      Daporan I'll check it out. I don't normally like advertising on other videos, but you've been nice enough.

  • @professordragon
    @professordragon a year ago +1

    You should definitely do a coded example of this, even if it's 4 years late...

  • @KPkiller1671
    @KPkiller1671 6 years ago +4

    I really think you should amend your title. I believe, as it stands, a lot of newcomers to neural nets are going to think that genetic algorithms are the be-all and end-all of training a neural network. I got caught up in this mess myself before discovering the world of gradient descent (and other optimization techniques) and backpropagation. Of course supervised learning techniques contain a lot more maths and are a fair amount more complex, but I don't think people should be told that this is definitively how all neural networks work.

  • @austinbritton1029
    @austinbritton1029 3 years ago +1

    Came here for the knowledge, subbed for the humor

  • @nesurame
    @nesurame 5 years ago

    This video taught me more about neurons than I learned in school

  • @dragun0v402
    @dragun0v402 6 years ago +11

    Did you draw this yourself? Well, that's something.

  • @ev3rything533
    @ev3rything533 3 years ago

    So how exactly are you using evolution combined with the neural networks? By the way, this was a great video explaining neural networks. I understood the basic concept, but didn't understand how the weights correlated to actual math.

  • @lkoyvse
    @lkoyvse 6 years ago

    kewl character ya drew

  • @nCUX1699
    @nCUX1699 6 years ago

    Even though I didn't find anything useful for me, it was a great video! Just try to get your audio a little more even throughout the video next time

  • @ohiasdxfcghbljokasdjhnfvaw4ehr
    @ohiasdxfcghbljokasdjhnfvaw4ehr 5 years ago

    I'd like to see more of this applied

  • @ther701
    @ther701 5 years ago +1

    0:50 Reminds me I have yet to learn the nervous system for exams

  • @Makex_sweden
    @Makex_sweden 6 years ago

    Can you please do a video on how to actually code/create those neural networks? That's the part I'm struggling with.

  • @user-qv6fs8if7o
    @user-qv6fs8if7o 5 years ago +3

    "Ah man this is confusing"😂

  • @TomasTomi30
    @TomasTomi30 5 years ago +1

    3Blue1Brown also made a great video about neural networks, definitely worth seeing

  • @NStripleseven
    @NStripleseven 3 years ago

    Came for no reason, stayed because big funny and smart

  • @tnnrhpwd
    @tnnrhpwd 2 years ago

    For anyone confused with the math at 8:00, he did not include his first step of creating the numbers in the far left column. The far left numbers change based on what input is used. I suppose this could be assumed, but it took me a second to realize it.

  • @ExtraTurtle
    @ExtraTurtle 3 years ago +3

    Hey, just a random question.
    Why did you need to check 4 times on the second level?
    Couldn't the ones that check for black just return a negative to the third layer instead of having the two yellow ones?
    The 2nd row and 4th row in the 3rd layer can just connect to both of the neurons in the 3rd layer, but with a blue one to the ones they're not connected to currently.
    Will that not work?
    Edit: I realize that it won't work with the current numbers, because if you have 3 blacks and 1 white for example, it will have +2 and -1 and still be positive. But what if we make the negative one much bigger? For example, positive adds 1 and negative just makes it negative, so if there's at least one negative, send 0 no matter what; if all positive, send 1.
    Also, why does the second level send 0? If it sent -1 instead, wouldn't the bias just be redundant?
    Is this a valid neural network? prnt.sc/10b6pbo
    Did I make a mistake here?
    Is it not allowed to have blue ones on the second layer?
    Is it not allowed to have numbers other than -1?

  • @maxhaibara8828
    @maxhaibara8828 6 years ago +27

    Hi Code Bullet, I have a question about the activation function.
    There are a lot of activation functions, but my teacher said that the best one is sigmoid (or tanh). Why? And is it really the best just because it's a continuous function? If it is, can we design our own activation function and have it actually work well? I know that in CNNs they use ReLU instead of sigmoid. Then what happens if we use sigmoid in a CNN, or even our own activation function?
    My teacher never answers questions seriously; they just said that it works better when you actually try it. But that still doesn't answer WHY it is the best. It might be better compared to the non-continuous step function, but is it better than all other activation functions? And also, why is sigmoid (or tanh) the only continuous activation function in my book?
    I think this topic would make an interesting tutorial video. Thank you.

    • @KPkiller1671
      @KPkiller1671 6 years ago +8

      The reason we use activation functions is to introduce non-linearity into the NN model. Otherwise we could achieve the same thing with a single matrix multiplication. With more layers and non-linear activation functions, the model becomes a non-linear function approximator.
      The reason we like to use functions like sigmoid, tanh and ReLU is because they are easily differentiable. There is a supervised learning technique called gradient descent through backpropagation which is used in many tasks instead of genetic algorithms. However, gradient descent requires computing the gradient of the "cost" function (fitness function, if you want to think of it that way) with respect to the weights of the network. This is a massive chain rule problem, and since the step function is flat everywhere except at the jump, its gradient is 0, so all of the calculations become 0, making it impossible to use backpropagation.
      I advise you to look up backpropagation and how it works. 3Blue1Brown has an awesome video about it:
      ua-cam.com/video/IHZwWFHWa-w/v-deo.html
      Also, sigmoid is deemed weaker than tanh and ReLU these days. Lots of tanh and ReLU models are dominating, with ReLU coming out on top. (Of course, it really depends on the model's context.)
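
      A short sketch of those activations and their derivatives, assuming Python with NumPy (illustrative, not from the video); the derivatives are what the backpropagation chain rule multiplies together, which is why a step function, whose derivative is 0 everywhere except the jump, can't be trained this way:

          import numpy as np

          def sigmoid(x): return 1 / (1 + np.exp(-x))
          def relu(x):    return np.maximum(0.0, x)

          # Derivatives used in the backpropagation chain rule (tanh itself is np.tanh)
          def d_sigmoid(x): return sigmoid(x) * (1 - sigmoid(x))
          def d_tanh(x):    return 1 - np.tanh(x) ** 2
          def d_relu(x):    return (x > 0).astype(float)

          x = np.array([-2.0, 0.0, 2.0])
          print(sigmoid(x), d_sigmoid(x))   # smooth, non-zero gradients
          print(relu(x), d_relu(x))         # gradient is 0 for negative inputs, 1 for positive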

    • @maxhaibara8828
      @maxhaibara8828 6 years ago

      KPkiller1671 but still, if we're just talking about being differentiable, there might be other activation functions that are harder to differentiate but work better. Easy doesn't mean the best.
      About the polar value that is mentioned by NameName, I never heard about that. I think I'll look it up.

    • @maxhaibara8828
      @maxhaibara8828 6 years ago

      NameName ah ic

    • @banhai2
      @banhai2 6 years ago +3

      I'm not sure sigmoid/tanh is the best way to go all of the time; ReLU, for example, reduces gradient vanishing and enforces sparsity (if below 0, then activation = 0, which translates into fewer activations, which tends to be good for reducing over-fitting).
      Why one is better or worse than the other in different cases can hardly ever be found analytically, though.

    • @KPkiller1671
      @KPkiller1671 6 years ago

      Max Haibara you will find in machine learning that if something is faster to differentiate, your model can be trained faster. Also, ReLU has a neat advantage over all other activation functions:
      You can construct residual blocks for deeper networks. If the network has too many layers, an identity layer would be needed somewhere. ReLU allows the network to do this easily by just setting the weight matrix of that layer to 0s.

  • @taranciucgabrielradu
    @taranciucgabrielradu 2 years ago

    Funny how I knew literally every single thing in more detail because well... I have a computer science Masters degree. But I still stayed for the entertainment you provide because yes baby

  • @NHSynthonicOrchestra
    @NHSynthonicOrchestra 6 years ago

    Here’s an idea, what about putting an AI against a Rhythm game like guitar hero/clone hero or any Rhythm game like necrodancer. Could you possibly make an AI to complete a game?

  • @deepslates
    @deepslates 4 years ago +6

    I didn’t understand the “oversimplified” explanation.
    Imagine neuroscientists

  • @chthonicone7389
    @chthonicone7389 6 years ago +2

    Code Bullet, Ben's mother cares about his soma!
    Seriously though, I was thinking, and I think there is a reason why all of your asteroids playing AIs devolve into spinning and moving. The problem that causes this is your input mechanism. It is too simple.
    Stop me if I'm wrong, but you explained your input mechanism as having the following inputs: Whether or not there is an asteroid (possibly distance) in each of 8 directions around the ship, ship facing, and ship speed. I do not remember if you give it ship position at all, but that is irrelevant in my opinion.
    The problem with this setup comes from the fact that to the AI, asteroids seem to vanish from moment to moment as they pass between the directions if they can fit between 2 rays comfortably. As such, the AI really isn't tracking asteroids from moment to moment.
    My solution is, what if you fed the AI distance, direction, and heading of each of the closest 8 asteroids in order from farthest to closest. This will allow the AI to have some object persistence, as well as actual tracking for the objects. Likely the AIs will be able to develop more complex strategies as a result.
    The overhead of such an approach is that you have about 3x the number of inputs, and while it's a linear increase in the number of inputs, it may result in an exponential increase in the number of neurons. However, a good GPU will likely be able to handle this.
    I would be interested in how this affects the AIs you bred, and whether or not they develop more intelligent techniques with the information they would be given.
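
    A rough sketch of the input layout proposed above, assuming Python; the Asteroid fields, the ship-relative maths, and the function name are hypothetical, not Code Bullet's actual code:

        import math
        from dataclasses import dataclass

        @dataclass
        class Asteroid:
            x: float
            y: float
            heading: float   # direction the asteroid is moving, in radians

        def nearest_asteroid_inputs(ship_x, ship_y, asteroids, count=8):
            # Keep the `count` closest asteroids, then list them farthest-to-closest.
            closest = sorted(asteroids,
                             key=lambda a: math.hypot(a.x - ship_x, a.y - ship_y))[:count]
            inputs = []
            for a in reversed(closest):
                inputs += [math.hypot(a.x - ship_x, a.y - ship_y),   # distance
                           math.atan2(a.y - ship_y, a.x - ship_x),   # direction to it
                           a.heading]                                # where it's going
            inputs += [0.0] * (3 * count - len(inputs))              # pad if fewer than 8 exist
            return inputs   # 24 values fed to the network each frame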

  • @blueyay
    @blueyay 6 years ago

    I watched your snake fusion video and I was wondering what would happen if you fused some AIs from different games...
    Perhaps introduce the fused AIs to a whole new game none of them experienced before.....

  • @_aullik
    @_aullik 6 years ago +37

    Can you please upload your Dino code to your github

  • @cpp_projects
    @cpp_projects 4 years ago

    Very helpful:)

  • @williampelletiervslol1949
    @williampelletiervslol1949 5 years ago

    #ask Will you make tutorials on how to code AIs?
    And how did you start making them?
    I don't see good tutorials on how to code them

  • @elcidbob
    @elcidbob 4 years ago

    Seems like it would be more effective, albeit more complicated, to implement how with real neurons there aren't two states, there are three. Neurons always fire. They have what's known as a resting rate, which is just the rate they fire at with no external influence. When inputs stimulate the neuron, it fires at a faster rate. When an input inhibits a neuron, it fires at a slower rate. When there's no input, or the inhibitory input equals the stimulating input, there's no change.

  • @bl4ckscor3
    @bl4ckscor3 6 years ago

    That perfect checkmark

  • @biteshtiwari
    @biteshtiwari 6 years ago

    Keep posting on AI. I want to learn AI coding, and you have natural teaching ability.

  • @lostbutfreesoul
    @lostbutfreesoul 6 years ago +1

    I still can't drop this idea of training two networks in a predator-prey format, and letting them go at it for a while....

    • @liam6550
      @liam6550 5 years ago +1

      and see what tactics each ai comes up with

  • @KernelLeak
    @KernelLeak 6 years ago +6

    6:30 / 11:55 RIP headphones... :(
    Maybe run your audio clips through a filter like ReplayGain that tries to make sure the audio has about the same overall volume before editing all that stuff together?

  • @supremespark2454
    @supremespark2454 6 years ago

    This is pretty simple.
    For those who are a bit lost, think of it as a filter, or a series of yes/no gates you have to pass through

  • @abdelbarimessaoud242
    @abdelbarimessaoud242 6 years ago

    Hello, I am new to this channel. I checked out Brilliant and already started learning ANNs, but I wonder how to actually program one. What program do you use? How do I learn to code it, and such? Thank you in advance

  • @michaelboyko8083
    @michaelboyko8083 5 years ago

    At 9:59, when you re-added the bias neuron, wouldn't you subtract 1 from the second-row neurons too, or am I just confused?

  • @SammyDoesAThingYT
    @SammyDoesAThingYT 6 years ago +2

    Question: When mutating the candidates in a population, how many mutations do you give to each candidate?

    • @KPkiller1671
      @KPkiller1671 5 years ago +1

      You usually don't dish out a fixed number of mutations. Quite often you will have a mutation rate of about 1%, but it is not set in stone. Another popular approach is to mutate at a rate of 1/successRate. Hope this helps :)
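
      A minimal sketch of a per-weight mutation rate as described above, assuming Python; the 1% rate and the Gaussian nudge are illustrative choices:

          import random

          def mutate(weights, rate=0.01):
              # Each weight independently has a `rate` chance of being nudged.
              return [w + random.gauss(0, 0.5) if random.random() < rate else w
                      for w in weights]

          genome = [0.2, -1.3, 0.7, 0.05]
          print(mutate(genome, rate=0.01))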

  • @janwarchocki6154
    @janwarchocki6154 6 years ago

    Do you use OpenAI in some programs?

  • @jamescollins8643
    @jamescollins8643 4 years ago

    Can you do an example for your actual projects, AI learning to play games rather than image recognition?