Introduction to Neural Networks in Python (what you need to know) | Tensorflow/Keras

  • Published 7 Jun 2024
  • Join the Python Army to get access to perks!
    UA-cam - / @keithgalli
    Patreon - / keithgalli
    In this video we start by walking through some of the basics. We look at why we use neural networks and how they function. We do an overview of network architecture (input layer, hidden layers, output layer). We talk a bit about how you choose how many hidden layers and neurons to have. We also look at hyperparameters like batch size, learning rate, optimizers (adam), activation functions (relu, sigmoid, softmax), and dropout. We finish the first section of the video talking a little about the differences between keras, tensorflow, & pytorch.
    Next, we jump into some coding examples to classify data with neural nets. In this section we load in data, do some processing, build our network, fit our data to it, and then finally evaluate our model. The examples get more complex as we go along. Some setup instructions for the coding portion of the video are found below.
    To install Tensorflow, download Anaconda: docs.anaconda.com/anaconda/in...
    Data & code used in tutorial: github.com/KeithGalli/neural-...
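    The overall workflow described above (load data, process it, build the network, fit, evaluate) looks roughly like this in Keras. This is only a minimal sketch, not the exact code from the repo; the file name and the x/y/color column names are placeholders.
    import pandas as pd
    import numpy as np
    import tensorflow as tf

    # Load and shuffle the training data (placeholder file and column names)
    train_df = pd.read_csv('train.csv')
    train_df = train_df.sample(frac=1).reset_index(drop=True)
    X = np.column_stack((train_df.x.values, train_df.y.values))
    y = train_df.color.values  # integer class labels (0 or 1)

    # Build a small network: input layer, one hidden layer, output layer
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4, input_shape=(2,), activation='relu'),
        tf.keras.layers.Dense(2, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    model.fit(X, y, batch_size=4, epochs=5)
    # model.evaluate(test_X, test_y) once a test set is loaded the same way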
    I’m going to post a follow up video to this soon where we walk through a real world example where we automatically classify images of hands for the game of rock, paper, scissors. Hopefully that should be up about 2 weeks from now. (EDIT: part 2 has been posted, link below)
    If you enjoyed this video, make sure to like & subscribe. Feel free to leave any questions in the comments section.
    Part 2!
    • Real-World Python Neur...
    ------------------------------
    Finally by Loxbeats / loxbeats
    Creative Commons - Attribution 3.0 Unported - CC BY 3.0
    Free Download: bit.ly/FinallyLoxbeats
    Music promoted by Audio Library • Finally - Loxbeats (No...
    ------------------------------
    Video timeline!
    0:00 Video overview
    1:34 Why use neural networks
    3:08 How neural nets work (architecture basics)
    6:11 Hyperparameter overview (batch size, optimizer, dropout, learning rate, epochs)
    7:53 How do we choose layers, neurons, & other parameters?
    9:08 Why do we need an activation function?
    10:20 What activation function should I use?
    11:25 Keras vs Tensorflow vs PyTorch
    12:30 Coding starts (github & setup)
    14:07 Writing our first neural network (linear example)
    18:45 Selecting optimizer & loss function (model.compile)
    23:45 Fitting training data to our model (model.fit)
    27:31 Shuffle order of training data
    30:12 Evaluate model on test data (model.evaluate)
    32:00 Example #2: Classifying quadratic data
    36:06 Example #3: Classifying 6 clusters of data (try on your own)
    41:03 Using network to predict a single data point (model.predict)
    43:27 Example #4: Classifying multiple labels at a time (BinaryCrossentropy loss)
    55:19 Example #5: Classifying our complex data from start of video
    59:00 Conclusion & Next steps of learning neural nets
    ---------------------
    Follow me on social media!
    Instagram | / keithgalli
    Twitter | / keithgalli
    Practice your Python Pandas data science skills with problems on StrataScratch!
    stratascratch.com/?via=keith
    If you are curious to learn how I make my tutorials, check out this video: • How to Make a High Qua...

COMMENTS • 148

  • @KeithGalli
    @KeithGalli  3 years ago +19

    Hey everyone! Quick update, looks like the code for example #4 (clusters_two_categories) has been causing some people issues. Running the code that I demo in the video is resulting in low accuracy scores. I'm guessing something changed with one of the libraries used behind the scenes.
    I challenge you to try to rewrite this network from scratch and see if you are able to classify the data properly. If you are able to do this, please let me know what you changed! I want to share with everyone who is running into this problem. I'm happy to give you a shoutout if you find a solution :).
    A few suggestions that might help as you try to rewrite the network... You'll see some immediate performance boosts if you normalize the data between 0 and 1 instead of the range it currently is in. I also recommend playing around with the hyperparameters to the network (number of layers, neurons per layer, learning rate, loss function, optimizer, etc.). Maybe try using different methods to vectorize the data. Let me know if you are able to find a solution!
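    A minimal sketch of the normalization suggestion above, assuming the x/y feature columns of the tutorial data (min-max scaling each column into [0, 1], with the ranges taken from the training set):
    # Scale each feature column to the [0, 1] range
    for col in ['x', 'y']:
        col_min, col_max = train_df[col].min(), train_df[col].max()
        train_df[col] = (train_df[col] - col_min) / (col_max - col_min)
        test_df[col] = (test_df[col] - col_min) / (col_max - col_min)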

    • @viciousJavad
      @viciousJavad 3 years ago +2

      And to give you a hint on how to address such problems in the future: first look at the loss value, which in this case is a pretty small value (less than 1) and keeps getting smaller with each epoch. This is an indication that your neural network is working absolutely fine. But if you still see substantially low accuracy values, it means you are measuring in the wrong format and you've got to change your metrics measurement.

    • @viciousJavad
      @viciousJavad 3 years ago +21

      I wrote this reply before and I don't know why it was removed!?
      This time I'm showing the answer right away.
      Change this:
      metrics=['accuracy'])
      to this:
      metrics=[tf.keras.metrics.BinaryAccuracy()])
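      In context, that change would land in the compile call for example #4, roughly like this (a sketch; the adam optimizer is assumed, as elsewhere in the video):
      model.compile(optimizer='adam',
                    loss=tf.keras.losses.BinaryCrossentropy(),
                    metrics=[tf.keras.metrics.BinaryAccuracy()])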

  • @jonpounds1922
    @jonpounds1922 4 years ago +76

    An hour of free online teaching from the best there is. I'm not sure people know how lucky they are.

  • @mohammedalzamil7191
    @mohammedalzamil7191 4 years ago +27

    You're not like the other tutorial guys. You explain like you're talking to a person and not to a robot taking in info, and the amount of useful info that makes you understand perfectly is extremely well balanced. I just love you, please keep posting. You don't know how many people you help, and the help is freaking amazing too. Love you.

    • @KeithGalli
      @KeithGalli  4 years ago +3

      Appreciate the support! 🙌

    • @markteeterborough6043
      @markteeterborough6043 4 years ago +2

      Couldn't have said it better myself! I've been watching his stuff since the beginning. I'm a long time viewer, first time commenter. LOL! Lotta love in this thread, keep it up bois! Happy pride month to ALL!

  • @kccchiu
    @kccchiu 2 years ago +6

    At 38:30, instead of creating a dictionary and applying a lambda function, the easier way would be
    train_df['color'] = pd.factorize(train_df.color)[0]
    test_df['color'] = pd.factorize(test_df.color)[0]
    And awesome tutorial! Out of all tutorials on UA-cam, your tutorials are the easiest to follow and I always follow from start to finish.
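    For comparison, a sketch of both approaches (pick one or the other; the 'red'/'blue' label names are an assumption and should match the actual values in the data):
    import pandas as pd

    # Explicit mapping with a dictionary + lambda (the approach shown in the video)
    color_map = {'red': 0, 'blue': 1}
    train_df['color'] = train_df['color'].apply(lambda c: color_map[c])

    # Or let pandas pick the integer codes automatically
    train_df['color'] = pd.factorize(train_df['color'])[0]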

  • @KeithGalli
    @KeithGalli  4 years ago +24

    Video timeline!
    1:34 - Why use neural networks
    3:08 - How neural nets work (architecture basics)
    6:11 - Hyperparameter overview (batch size, optimizer, dropout, learning rate, epochs)
    7:53 - How do we choose layers, neurons, & other parameters?
    9:08 - Why do we need an activation function?
    10:20 - What activation function should I use?
    11:25 - Keras vs Tensorflow vs PyTorch
    12:30 - Coding starts (github & setup)
    14:07 - Writing our first neural network (linear example)
    18:45 - Selecting optimizer & loss function (model.compile)
    23:45 - Fitting training data to our model (model.fit)
    27:31 - Shuffle order of training data
    30:12 - Evaluate model on test data (model.evaluate)
    32:00 - Example #2: Classifying quadratic data
    36:06 - Example #3: Classifying 6 clusters of data (try on your own)
    41:03 - Using network to predict a single data point (model.predict)
    43:27 - Example #4: Classifying multiple labels at a time (BinaryCrossentropy loss)
    55:19 - Example #5: Classifying our complex data from start of video
    59:00 - Conclusion & Next steps of learning neural nets
    Thank you for watching! If you enjoyed, remember to throw this video a like & consider subscribing if you haven’t already :).

    • @anizobaobinna9238
      @anizobaobinna9238 4 years ago

      Which IDE are you using for the coding, Keith? My Jupyter notebook doesn't really have autocomplete like the one you are using, and I really want to get one that can suggest possible completions once you type in a word, just like yours does... Is that PyTorch?

    • @88Timur88Bahmudov88
      @88Timur88Bahmudov88 4 years ago

      Hey Keith, I really like your videos. I have a little piece of advice: if you also put the timecodes in the description, then on a computer you can see the timeline divided into chunks with the name of each timecode. Try it, it's really convenient (you can just copy the timecodes from the comments into the description; it doesn't matter in which part of the description you put them). I hope you'll consider doing it, thanks for your great videos! :))

  • @iamwangwe
    @iamwangwe 4 years ago +8

    Thanks, Keith, everything I learn from your tutorials always sticks... Looking forward to the follow-up video.

    • @KeithGalli
      @KeithGalli  4 years ago

      Happy to hear that man! :)

  • @bryanchambers1964
    @bryanchambers1964 3 years ago +1

    The lambda function color map, just elegant, clear, and beautiful. Great work, thanks.

  • @carrocesta
    @carrocesta 1 year ago

    I really appreciate how easy to follow you make this lesson, thanks dude!

  • @GhizlaneBOUSKRI
    @GhizlaneBOUSKRI 3 years ago +1

    An hour of pure knowledge delivered in such a lucid manner that it makes one feel comfortable and know that even if they mess things up, it can still be OK.
    No Data Science taboos when Keith is here.

  • @dana6006
    @dana6006 3 years ago +2

    this man will literally be the reason I get employed

  • @MrKhaledpage
    @MrKhaledpage 2 years ago

    I feel lucky to find such a good video free on YouTube. Thanks a lot Keith!

  • @asfasdfsd8476
    @asfasdfsd8476 1 year ago

    I found my current job because of your vids. Thanks man

  • @thebeston6710
    @thebeston6710 4 years ago

    Yes! Finally another video. I was waiting for ages

  • @b.f.skinner4383
    @b.f.skinner4383 3 years ago

    Understanding the tf documentation can be overwhelming when first trying to learn the library but this was really helpful and explained clearly, thank you

  • @hiukecil
    @hiukecil 3 years ago

    thanks so much bro. I love all your tutorial videos. As always, I learn so much from this tutorial. I am looking forward to your future videos

  • @tech4028
    @tech4028 4 years ago +3

    Love your videos. I'm a high school student and I really like your vids compared to other resources I've used so far.

    • @KeithGalli
      @KeithGalli  4 years ago

      Glad to hear it! I appreciate the support :)

  • @prasadshiva3538
    @prasadshiva3538 4 years ago

    Best video Keith, eagerly awaiting the next one.

  • @nirajbasyal1159
    @nirajbasyal1159 4 years ago +2

    Sir, you are awesome, thanks a lot. Please, please continue this deep learning series with Keras and TensorFlow.

  • @mirayakinci4423
    @mirayakinci4423 4 years ago

    Your explanation is very clear. Thanks for your work.

  • @huhuboss8274
    @huhuboss8274 4 years ago

    amazing video as always

  • @reubenrapose1817
    @reubenrapose1817 4 years ago +2

    Damn good! Thanks for this! Please cover more complex NNs with examples, like RNNs, CNNs, GANs.

  • @tahamunawar287
    @tahamunawar287 3 years ago

    Thanks for your tutorial, I am looking forward to becoming a data scientist through your tutorials.
    You deserve more subscribers.

  • @ngovietluong5934
    @ngovietluong5934 2 years ago

    This guy's going to heaven in first class.

  • @symnshah
    @symnshah 3 years ago

    Thanks, Keith, for such a nice video on the subject. Please make a video on Generative Adversarial Networks (GANs).

  • @taytayley
    @taytayley 4 years ago

    Thank you Keith, you have done an awesome public service. 👏

  • @aishapervaiz9409
    @aishapervaiz9409 4 years ago +1

    Keith, I did a Python course on Coursera and was looking to practice Data Science and Machine Learning. Luckily I found your channel and have already gone through the Pandas, NumPy, and sales analysis videos related to data science. I feel super motivated after that, as I learned so many things. Kindly upload more real-world projects that require deeper analysis. I love your skills; they are helping me polish my own and develop my GitHub profile :D Now I'm jumping to ML and AI related topics, as I want to switch my career to DS and ML, and learning ML models is key to being successful in that. I'm also following Andrew Ng. Can you refer me to good books or resources that you learned from? I would be glad. Best online teaching channel ever.

  • @virendartripathi4645
    @virendartripathi4645 4 years ago

    Yeah bro, please make videos more often. I learn many things from your videos, thank you very much bro, love your videos.

  • @shouryagupta7624
    @shouryagupta7624 4 years ago +2

    Please make a series on data structures and algorithms.

  • @rohitpokhrel8840
    @rohitpokhrel8840 2 years ago

    Great tutorial, thanks! :)

  • @DK-dp3kk
    @DK-dp3kk 4 years ago

    Dude you are solid. Great, clear explanations. Keep it up.

    • @KeithGalli
      @KeithGalli  4 years ago

      Thank you man! Glad you enjoyed

    • @DK-dp3kk
      @DK-dp3kk 4 years ago

      @@KeithGalli Have you done any CNN videos? I couldn't find any. I am working on a CNN project and would love to loop that into a GAN at the end...

    • @DK-dp3kk
      @DK-dp3kk 4 years ago

      @@KeithGalli You are my Python guru so I thought I'd ask: for training ML models, is there a good single-board computer? I think you can do it on the Jetson Nano or TX2, but I wondered if you had any thoughts and maybe even some benchmarks?

  • @titowoche
    @titowoche 3 years ago

    Really nice video

  • @twiggygarcia5096
    @twiggygarcia5096 4 years ago

    Great video!

  • @datastako156
    @datastako156 2 years ago

    Thank you sir, I learned a lot from you!

  • @neila.9195
    @neila.9195 4 years ago +1

    Dope video bruv !

  • @jayyijianyi
    @jayyijianyi 4 years ago

    Thanks a lot Keith, I'm doing AI at my uni and you are teaching way better than my lecturers. Do you think you would do a tutorial on genetic algorithms or intelligent agents too? Appreciate your effort!

  • @akshatkhare7938
    @akshatkhare7938 4 years ago +4

    Can you please continue the deep learning and neural networks tutorials?

  • @kanitdas8394
    @kanitdas8394 4 years ago +2

    Hey Keith thanks for uploading this one!!
    I wanted to learn from you so I sent you a DM over IG (If you remember) and you posted it within 3 days.
    Thanks a ton!!

  • @MrKrishnalovesyou
    @MrKrishnalovesyou 4 years ago +1

    Thanks buddy... Hope you are doing well. Stay safe! ✌️

    • @KeithGalli
      @KeithGalli  4 years ago

      You're welcome! :). Hope all is going well for you too!

  • @lokguanlim7420
    @lokguanlim7420 4 years ago

    Thanks for the tutorial :D

  • @prof_albert
    @prof_albert 1 year ago

    Just as I always say, the best, man. Bravo bro 🙏💞👌🤩🇺🇲

  • @moseskioko7820
    @moseskioko7820 3 years ago

    Good content, thanks. Would you kindly mind putting up some content on neural networks with PyTorch?

  • @iyar220
    @iyar220 1 year ago

    I've added this to a Watch Later playlist, but I'd like to ask: when should I watch this?
    I'm currently taking a scikit-learn course, while also doing the Andrew Ng machine learning specialization course series. I am in the second week of the first Andrew Ng course, so when should I start learning about neural networks? I presume after the first and second courses in Andrew Ng's 3-course series?

  • @vaibhav-_-5462
    @vaibhav-_-5462 3 years ago +1

    Thanks bro, love from India.

  • @shadow1403piros
    @shadow1403piros 1 year ago

    Really enjoyable to watch! Just a quick question, how is it that for examples #1 and #2 you set the last layer to have 2 nodes with sigmoid activation functions? For binary classification, shouldn't you use either 2 nodes with 'softmax' or 1 node with 'sigmoid'? How is it that Tensorflow is not throwing any errors when you are passing just 1 ground truth label (0 or 1, from train_df.color.values) but the last layer is predicting 2 numbers (each between 0 and 1)?

  • @ericwxng
    @ericwxng 4 years ago

    just got out of a course on this, this is some good stuff >:)

  • @mrlovalovaa
    @mrlovalovaa 4 years ago

    Thanks keith 😊

  • @mudsky
    @mudsky 2 years ago

    THANK U

  • @himansusugra9496
    @himansusugra9496 3 years ago

    For problem #4 I used CategoricalCrossentropy instead of BinaryCrossentropy, set the model architecture to Dense(64, input_shape=(2,), activation="relu") -> Dense(64, activation="relu") -> Dense(64, activation="relu") -> Dense(32, activation="relu") -> Dense(32, activation="relu") -> Dense(9, activation="sigmoid"), and set epochs=20. I was able to achieve 93% accuracy. I was using a Jupyter notebook.
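    A sketch of the network described above. The commenter specified the layers, loss, and epochs; the optimizer and metric here are guesses, and the fit call assumes labels vectorized as 9-dimensional vectors as in the tutorial's example #4.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, input_shape=(2,), activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(32, activation='relu'),
        tf.keras.layers.Dense(9, activation='sigmoid'),
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.CategoricalCrossentropy(),
                  metrics=[tf.keras.metrics.CategoricalAccuracy()])
    # model.fit(X_train, y_train, batch_size=32, epochs=20)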

  • @naveenprakash3640
    @naveenprakash3640 4 years ago

    Thanks Keith....

  • @raspberrypi4970
    @raspberrypi4970 4 years ago

    Try Leap2 from D-Wave

  • @wiz8058
    @wiz8058 4 years ago

    Great, Keith has done it again. Big ups bro, keep it all moving, and well done.

    • @KeithGalli
      @KeithGalli  4 years ago

      Always appreciate your support buddy!

  • @EOh-ew2qf
    @EOh-ew2qf 3 years ago +2

    Clusters with two categories are stuck at 0.3-0.4 accuracy even when using binary cross-entropy.

    • @viciousJavad
      @viciousJavad 3 years ago +1

      Yeah, faced the same problem. I am trying to figure out how to solve this crazy underfitting.

  • @z0mbielollipop
    @z0mbielollipop 4 years ago

    Great vid! I don't suppose anyone has the code to change the predicted value to a color output? i.e. [0,1] to red/blue, etc.?
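    One way to do this, as a sketch; the label order is an assumption and should match how the colors were encoded during training.
    import numpy as np

    colors = ['red', 'blue']                      # assumed encoding: 0 -> red, 1 -> blue
    pred = model.predict(np.array([[0.5, 0.5]]))  # one (x, y) point
    print(colors[int(np.argmax(pred))])           # index of the largest output -> color name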

  • @manu93ize
    @manu93ize 4 years ago

    Hey,
    really like your content, I have learned a lot from your videos. Your teaching style is very easy to follow even for a total beginner like me.
    Can you make a video on data scraping with Beautiful Soup?

    • @KeithGalli
      @KeithGalli  4 years ago +2

      Trying to make one on beautiful soup soon!

  • @saiprasanthbabukondrakunta6843
    @saiprasanthbabukondrakunta6843 3 years ago

    Can you do a video on image classification using grayscale multi-frame TIFF images?

  • @Aditya-zf7ch
    @Aditya-zf7ch 4 years ago

    Sir you are great

  • @collinguidry9867
    @collinguidry9867 4 years ago

    Thank you

  • @juank46983
    @juank46983 4 years ago

    Could you help? How can I assign the accuracy to a variable, like a = accuracy or a = binary_accuracy? This is part of the code of the neural network:
    model = Sequential()
    model.add(Dense(48, input_dim=48, activation='relu'))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(2, activation='sigmoid'))
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics=['binary_accuracy'])
    model.fit(training_data, target_data, epochs=1000)
    scores = model.evaluate(training_data, target_data)
    "training_data, target_data are arrays"
    Result of evaluation:
    binary_accuracy: 0.5000
    binary_accuracy: 50.00%
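    One way to grab the accuracy as a variable, sketched against the code above (the history key name can differ slightly between Keras versions):
    # model.evaluate returns [loss, metric1, ...] in the order given to compile()
    loss, binary_acc = model.evaluate(training_data, target_data)
    a = binary_acc

    # or read it back from the training history returned by model.fit
    history = model.fit(training_data, target_data, epochs=1000)
    a = history.history['binary_accuracy'][-1]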

  • @zer0r01
    @zer0r01 3 years ago

    I googled softmax and it says that it can handle multi-class classification... how come you write that it handles a single class only, while sigmoid handles multiple?

  • @saurrav3801
    @saurrav3801 4 years ago

    You are awesome bro ....

    • @KeithGalli
      @KeithGalli  4 years ago

      I appreciate the support 🙌

  • @QuantumWithAnna
    @QuantumWithAnna 3 years ago

    np.random.shuffle(dataFrame.values) does not actually shuffle the data frame. I checked. One way to shuffle a dataframe is to get the result of dataFrame.sample(n)
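    A sketch of that alternative, shuffling the whole frame (using the train_df name from the tutorial):
    # sample(frac=1) returns every row in random order; reset_index keeps a clean 0..n index
    train_df = train_df.sample(frac=1).reset_index(drop=True)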

  • @shitmusic8117
    @shitmusic8117 3 years ago +3

    Keith, when I run your code on network clusters with 2 categories (network_clusters_2.py) I only get accuracy of about 0.1.
    Do you know what might be the problem?

    • @shitmusic8117
      @shitmusic8117 3 years ago

      I've noticed my computer is only processing 1500 rows of data:
      Epoch 10/10
      1500/1500 [==============================] - 1s 382us/step - loss: 0.6379 - accuracy: 0.1952
      That might be the reason why it's so inaccurate!

    • @KeithGalli
      @KeithGalli  3 years ago

      @@shitmusic8117 Not sure, but a few others have mentioned they are running into similar issues. I just pinned a new comment requesting help from viewers to find a solution to the issues you are experiencing!

  • @hrushikesavarajusangaraju3689

    Can you provide the right code for building and modeling household power consumption?
    # Build and train a neural network to predict time-indexed variables of
    # the multivariate household electric power consumption time series dataset.
    # Using a window of the past 24 observations of the 7 variables, the model
    # should be trained to predict the next 24 observations of the 7 variables.
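    Not the exact code being asked for, but one possible shape for such a model, as a rough sketch (window of 24 time steps x 7 variables in, 24 time steps x 7 variables out; layer sizes are arbitrary):
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(24, 7)),   # past 24 observations of 7 variables
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(24 * 7),
        tf.keras.layers.Reshape((24, 7)),       # next 24 observations of 7 variables
    ])
    model.compile(optimizer='adam', loss='mse')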

  • @denniskara1221
    @denniskara1221 3 years ago +1

    Hello Keith, I love your tutorials!! One question about this one: I'm using Jupyter notebook and I get very low accuracy in example #4. I just copy-pasted your code, all the data is loaded right, and I don't know what's wrong... Any ideas?

    • @KeithGalli
      @KeithGalli  3 years ago +2

      Hey Dennis, glad you like the videos! This is an issue that a couple other viewers brought up to me. I looked into it a bit and I was able to recreate the issue on my end, but wasn't able to debug why it was happening. I think something changed recently with one of the libraries, but I didn't have the time to figure out exactly what.
      I think I'll pin a new comment to this video over the weekend letting people know about this issue and challenging people to construct a new network to properly classify this data. If people are able to find new solutions, I'll be sure to share how they did this.

    • @InfiltratorChris
      @InfiltratorChris 2 years ago

      @@KeithGalli Hey, I've been looking into this and found out that the accuracy is calculated wrongly. I got it to work with an actual accuracy of 99%. Hit me up if you want the exact code. Also try it yourself by just comparing the predicted labels and the testing labels :) Hope that helps.

  • @MuhammadKashif-zt2fm
    @MuhammadKashif-zt2fm 3 years ago

    Keith Galli, please provide the source of the dataset you used. Thanks.

    • @KeithGalli
      @KeithGalli  3 years ago

      I created the data used in the video!

  • @raunak51299
    @raunak51299 3 years ago

    Wish I found this gem earlier.

  • @jackoverstreet2835
    @jackoverstreet2835 4 years ago

    I have a question. When you check your model's performance on the validation data, that would allow the model to see that data. Wouldn't that increase its performance the next time it sees the validation data? (Like if you tune some parameters and try again) Then that would make its future performances on the validation data not an accurate measure of its performance. That's why there is test data right? If my thinking is correct then: Is that inaccuracy so small that it can be ignored when you are tuning your hyperparameters? Or, is there no artificially increased accuracy for validation data when you re-create and re-fit the model?

    • @KeithGalli
      @KeithGalli  4 years ago +1

      Great question. You have good intuition here. The answer comes with the difference between "model.fit" & "model.evaluate". When we call model.fit, data is passed through our network, predictions are made, and then network parameters are updated based on its performance of these predictions. In the model.evaluate method, however, we freeze all of our learned network parameters. This means that we never update the model no matter how much data is passed through it. As a result of freezing these parameters, future performance is not affected by how many times you run the validation data through the network (as long as you are using model.evaluate). Also worth noting, network parameters are not updated when you use model.predict() either. Hope this helps!
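      A sketch of that distinction, with hypothetical array names:
      history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)  # weights update here
      scores = model.evaluate(X_val, y_val)   # weights frozen; run this as often as you like
      preds = model.predict(X_test)           # also leaves the weights untouched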

    • @jackoverstreet2835
      @jackoverstreet2835 4 years ago

      @@KeithGalli Thank you for replying, it makes a lot more sense to me now. After reading your response I was still confused on why you need a test set if the validation set is already giving an unbiased accuracy. I searched online and what I found seemed to make a lot of sense which is:
      The test data is needed because the validation data accuracy has been improved by adjusting hyperparameters in the model (like the layers, batch size, epochs, activation function, and dropout). Because of that we need the test set to make sure that the model works on a fresh set of data, just in case the hyperparameters were adjusted in a way that only worked well with the validation data set.
      Feel free to correct me if that's wrong because I'm learning these concepts for the first time.

    • @KeithGalli
      @KeithGalli  4 years ago +1

      Yep you're correct here. Ultimately if you spend enough time tuning hyperparameters you might overfit the validation data, so having a separate test set can be a good sanity check that things will generalize well across all examples in the wild.

  • @wt7658
    @wt7658 3 years ago +1

    11:19 Isn't softmax for multi-class and sigmoid for binary (yes/no type)? Thanks!

    • @KeithGalli
      @KeithGalli  3 years ago +1

      Yeah that's correct, I could have clarified what I meant a bit more in the video. So with softmax, if you have 10 classes, you would use it to predict a single label out of the 10 classes. With sigmoid I said it was multi-class because each of those same 10 classes would independently be classified with yes/no (binary), so in total you might have multiple classes in your predictions. Hope this makes sense.

    • @wt7658
      @wt7658 3 years ago +1

      @@KeithGalli Thanks. That makes sense now. I would also like to add that for binary labels such as cats/dogs, both softmax and sigmoid would work, with different losses in compile:
      1. tf.keras.layers.Dense(2, activation = 'softmax') --> model.compile(loss = 'sparse_categorical_crossentropy')
      2. tf.keras.layers.Dense(1, activation = 'sigmoid') --> model.compile(loss = 'binary_crossentropy')
      Hope it helps. :)

  • @_matis_
    @_matis_ 3 years ago

    Hi Keith, I am learning Python, and with that I went through your NumPy and pandas tutorials. I want to apply some sort of machine learning to trading. Can you recommend me something that isn't too complicated for a beginner? Your pandas and NumPy tutorials had a good pace and they were not too hard. I watched 12 minutes of this video and it seems so intimidating. Can you help please? Thank you.

    • @KeithGalli
      @KeithGalli  3 years ago +1

      Yeah the neural nets stuff definitely can be a bit intimidating at first. For a potentially better paced machine learning tutorial, I recommend checking out my sklearn video: ua-cam.com/video/M9Itm95JzL0/v-deo.html. I think it is easier to digest than this one.

    • @_matis_
      @_matis_ 3 years ago

      @@KeithGalli Thank you very much, will look into it. Can you point me to a trading tutorial/source that involves ML and that isn't as hard as your NumPy and pandas tutorials? Thanks.

  • @sebastianalvarez1537
    @sebastianalvarez1537 4 years ago

    beast

  • @aiyanyorscott6169
    @aiyanyorscott6169 4 years ago

    Sorry if I may ask, how long have you been coding to get this good? I mean, how long might it also take me to get very good at Python?

    • @KeithGalli
      @KeithGalli  4 years ago

      I've been coding for a bit over 6 years. I would say it took me a little over a year for things to really start clicking, but that amount of time can be more or less depending on the person.

  • @OriginalBernieBro
    @OriginalBernieBro 4 years ago

    14:59 Holy shit. Grabs new ROG G14, installs Sublime, has Kite already installed with VS Code and Atom.

  • @SSSNIPD
    @SSSNIPD 4 years ago

    Can you please make a setup tour?

    • @KeithGalli
      @KeithGalli  4 years ago

      Potentially! Is there anything in particular you would be most interested in seeing?

    • @SSSNIPD
      @SSSNIPD 4 years ago

      @@KeithGalli Mainly your computer specs for ML, and your keyboard :)

  • @noelthomasbejoy3089
    @noelthomasbejoy3089 3 years ago

    Is TensorFlow on CPU enough?

  • @duartepombo551
    @duartepombo551 2 years ago

    should i use machine learning to optimize what hyperparameters to use in the model that optimizes the hyperparameters to use in the model that optimizes the hyperparameters to use in the model that optimizes the hyperparameters to use in the model that optimizes the hyperparameters to use in the model????????????

    • @KeithGalli
      @KeithGalli  2 years ago +1

      Hahahaha yes of course, but you gotta go like 10 more levels deep 😂

  • @x33LALAx
    @x33LALAx 4 years ago +1

    When I run your clusters with two categories code, I get about 10% accuracy (while you get above 90% accuracy in the video), any idea what is wrong?

    • @KeithGalli
      @KeithGalli  4 years ago

      hmm my thought is that you might not have changed the loss function? Either you're not using BinaryCrossentropy or you're not using 'sigmoid' as activation in the last layer.

    • @abinavmuthukrishnan2899
      @abinavmuthukrishnan2899 3 years ago

      @@KeithGalli I used BinaryCrossentropy and sigmoid, yet I'm getting about 14% accuracy for clusters with two categories. Any idea what's wrong?

    • @Gibczon
      @Gibczon 3 years ago

      @@abinavmuthukrishnan2899 I'm having the same issue. I have changed to BinaryCrossentropy and I'm using sigmoid, and after that I'm getting around 14-15% with 10 epochs.

    • @KeithGalli
      @KeithGalli  3 years ago

      @@Gibczon In that case I would double check the vectors which represent the two categories and make sure the 1's and 0's are in the right place. It's also possible that something in one of the libraries used has changed recently. If I have some free time this weekend I might look into it a bit.

    • @titowoche
      @titowoche 3 years ago

      @@Gibczon I'm having the exact same problem. Actually, I just ran Keith's code and the results are just as bad.

  • @geodatacenter
    @geodatacenter 4 years ago

    Congratulations! If possible, could you also make a full desktop application with a wizard intro?

  • @SMFahim-vo5zn
    @SMFahim-vo5zn 4 years ago

    Hi Keith!
    I thought you were just working on data manipulation tasks since your Pandas tutorial, but it seems like you're covering the same topic I'm interested in. So I was wondering if you have any study group. If yes, then I'd like to join you and proceed along with you. I don't know how much I can help you, but it might be very helpful for me. So please let me know if you have any study group or want to open a Discord channel to discuss the topics. Maybe it will also help you make new videos. Looking forward to hearing from you. Good luck!

    • @SMFahim-vo5zn
      @SMFahim-vo5zn 4 years ago

      @wise guy let's wait for his reply...

    • @KeithGalli
      @KeithGalli  4 years ago +1

      @@SMFahim-vo5zn Sorry for the delay! I am not part of any study group right now, but I would love to do something like that in the future. Currently I'm too busy to get the ball rolling with study group/discord, but when/if my schedule frees up definitely something that I'm interested in pursuing.

  • @gabenli1330
    @gabenli1330 4 years ago

    Thank you so much bro!!! I know I might sound gay but I just want to say thank you and I love you!

    • @KeithGalli
      @KeithGalli  4 years ago

      You're very welcome! I appreciate the support :)

  • @stuartmcivor2276
    @stuartmcivor2276 2 years ago

    I seem to only be using 1000 rows of the training data and 32 of the test (quadratic case - it's the same for the linear).
    "Epoch 10/10
    1000/1000 [==============================] - 0s 403us/step - loss: 0.3222 - accuracy: 0.9973
    EVALUATION
    32/32 [==============================] - 0s 383us/step - loss: 0.3223 - accuracy: 1.0000"
    I have copied the file network_quadratic.py from Github and it's the same.

    • @stuartmcivor2276
      @stuartmcivor2276 2 years ago +1

      I think I have got it: the number is the number of batches, not the number of samples, although I don't know why it's displayed differently - possibly a different library version.
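      The arithmetic behind that reading, as a sketch (the Keras default batch_size of 32 is assumed, and the row count is inferred from the progress bar, not taken from the repo):
      import math
      batch_size = 32                                   # Keras default when model.fit gets no batch_size
      steps_per_epoch = math.ceil(32000 / batch_size)   # ~32,000 rows -> 1000 progress-bar steps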

    • @poke5443
      @poke5443 1 year ago

      @@stuartmcivor2276 Have you figured out how to make the output look as in Keith's video? I have the same output as you btw.

  • @muritalaadebayoisah9155
    @muritalaadebayoisah9155 3 years ago

    Thank you so much for your effort. The video really helped.
    Question: Sir, I need your guidance on where to get construction management datasets relating to cost, schedule, risk, safety management, etc.
    I have been facing challenges finding a dataset that I can use to practice building ANNs or deep neural nets for project cost prediction given project features.
    Generally, since I started learning ANN, DL, ML, and AI (5 months ago), I have discovered that construction management datasets are very difficult to find compared to other fields. I will be very glad if you can help with any web link.
    Also, I will patiently wait for the next part of this video.
    Thank you once again for the great job you are doing.

  • @zer0r01
    @zer0r01 3 years ago

    I think you mixed up softmax and sigmoid!

  • @Picker22
    @Picker22 4 years ago

    Love to watch your content, great one. Can you tell me if there will be any grey hat or black hat Python videos? Thank you.

  • @jeanzyx1707
    @jeanzyx1707 2 years ago

    gg

  • @jaydeepjoshi3412
    @jaydeepjoshi3412 3 years ago

    What's next now?

  • @anonviewerciv
    @anonviewerciv 2 years ago

    Num pee.

  • @realneosi
    @realneosi 4 years ago

    zzz

  • @sandhiyas5935
    @sandhiyas5935 3 years ago

    It's really hard to understand!! Totally baffling, try to make the videos simpler to understand.