After re-watching, I notice I never explained "epoch." An epoch is just a full pass through your entire training dataset. So if you train for just 1 epoch, the neural network has seen each unique sample once; 3 epochs means it passed over your dataset 3 times.
Epoch is just a fancy way to say iteration. Generally speaking, iteration/epoch is used in optimization where weights are updated after each time step. It's usually best to learn gradient descent prior to deep learning.
At 4:44 you would want to write " > 3".
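Tying the epoch discussion above back to code, here is a minimal sketch, assuming model, x_train, and y_train are already defined as in the video:
# One epoch = one full pass over all 60,000 training images,
# so epochs=3 means the network sees every sample three times.
model.fit(x_train, y_train, epochs=3)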
I like you Sentdex, but there was more than just the word epoch that you didn't explain. For example, what is x_test, y_test and why do we have something different in x_train and y_train? What does flatten mean? What is an optimizer, what does it do? To me there are also several other things left unexplained and sort of swept under the rug.
I believe you have the best of intentions and you do help a lot of people in becoming better programmers in several different ways. I just think you have two options when making these kinds of videos. The first is that you go through things so that people don't just vaguely understand them but actually master and really understand them.
The second option, I believe, is to say: I'm going to assume you know the basics of neural networks, and if you don't, go check out Andrew Ng and then come back to my videos.
I want all the videos. How can I get all of them?
I hope in a near-future tutorial you add a TensorBoard how-to, specifically showing images with a slider!
This guy is really authentic and legit. Doesn't beat around the bush, no cringy intro music, just straight to the point. Best Intro of Deep Learning I have watched so far!
This is exactly what I needed as the first clip on TensorFlow and deep learning. Consider this comment as a warm "thank you" and a remote (but firm) hand shake!
Installs keras, predicts mnist data, feels like GOD
"ok first we need to install tensorflow "
5 hours later i returned to the video haha
wait, it only took you 5 hours to get tensorflow installed?
how did u install it
@@samforsberg5698 gave up with pip and installed it with anaconda
@@yelmak only 5? Bruh, that shit took me 24. And I ended up using the same solution I had at the start, without realizing that was the installation *facepalm*
What kind of problems did you guys have? For me it was quicker; I had the wrong Python interpreter (32-bit instead of 64-bit)
I learned more from this video than I did in an entire deep learning course I took last academic quarter. Huge thanks, my dude :^) this is going to help a ton for my thesis work!
What are you majoring in?
@@gil-evens Computer engineering!
@@AnnnEXE nice, what's your thesis about?
@@gil-evens quantum error correction! It was really really fun :)
This tutorial gave a hands on approach to machine learning unlike most of the other tutorials. Thank you very much.
Just a brilliant beginning. Finally the YouTube notification led to something amazing
In later versions of tensorflow, you need to specify the input shape of the Flatten layer if you are reusing the saved model.
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
instead of
model.add(tf.keras.layers.Flatten())
Yes, thanks
24th November, 2024
import tensorflow as tf
from keras import activations
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
model.add(tf.keras.layers.Dense(128, activation=activations.relu))
model.add(tf.keras.layers.Dense(128, activation=activations.relu))
model.add(tf.keras.layers.Dense(10, activation=activations.softmax))
Hey bro, I'm getting ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
For people getting an error while loading the saved model, use the activation functions from keras, not from tensorflow, and specify the input_shape of the first layer. Following are the code changes -
from keras import activations
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
model.add(tf.keras.layers.Dense(128, activation=activations.relu))
model.add(tf.keras.layers.Dense(128, activation=activations.relu))
model.add(tf.keras.layers.Dense(10, activation=activations.softmax))
Thanks!
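Putting the fixes from the threads above together, a hedged sketch of the whole build / train / save / reload flow with input_shape on the first layer (filename as in the video; exact behaviour varies across TF/Keras versions):
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)

model = tf.keras.models.Sequential()
# Giving Flatten an input_shape lets the saved file carry the full architecture,
# avoiding the "0 layers" and fresh-optimizer warnings discussed in this thread
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
model.add(tf.keras.layers.Dense(128, activation="relu"))
model.add(tf.keras.layers.Dense(128, activation="relu"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3)

model.save("epic_num_reader.model")
new_model = tf.keras.models.load_model("epic_num_reader.model")
val_loss, val_acc = new_model.evaluate(x_test, y_test)
print(val_loss, val_acc)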
Thank god, I was following the old tutorials and these are so much easier.
Glad to hear these are easier to follow!
Thank you sooooo much Sentdex. I'm doing my thesis and I needed to learn how tensorflow works. Awesome tutorials here
Awesome, best wishes to you on your thesis!
Nice vid. So many videos ramble on side thoughts and spend 20 minutes explaining their hypothetical use case. This dives directly into it. Thank you!!!!
now this is the reason i subbed, have been learning python for the last month and can't wait to get deeper into deep learning
Man, I could kiss you. I've spent the better part of a week trying to find an up to date tutorial on keras where the person actually types the code out and explains what it does as he does, and this is exactly what I've been looking for. Very helpful!!
Hey Harrison, I just wanted to thank you for making these amazing tutorials! I finished your beginner tutorial, and while it went pretty deep into some things, I have a great toolkit that I can use to explore web development, data analysis, and cyber security! I love programming and your videos are some of the most helpful resources that I have been privileged to discover. I promise I will take this knowledge and apply it to something that will change the world! :D
God willing (Inshallah) ❤️
I listened to the entire lecture while driving.
The way the coding part was done, reading it out loud while typing, was superb!
I had the whole picture even without looking at the screen.
Bundle of thanks
Why do I have to keep coming on youtube to simply learn something (for free) that I came to university for?
Degree
@@Jitesh-ek5xf Hardly worth anything these days
@@labreynth hmmm I guess so
Maybe because they don't explain it well or elaborate on it? That's the case for me; they only teach us the math but not the actual code
I think it is the curriculum and the Degree
I am so happy you are doing an update on this topic right after i finished your main machine learning playlist. Your videos have been very helpful, thank you so much for your work !
17:50 Actually,
predictions = new_model.predict([x_test])
broke the program,
then I changed it to
predictions = new_model.predict(x_test)
And it worked Fine :)
Thanks
Same here, thank you! I was looking for an answer :D
same here too... thanks
Thank you :)
same
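For anyone hitting this, a small sketch assuming new_model, x_test, and y_test already exist from the earlier cells:
import numpy as np

# Pass the array directly; wrapping it in a Python list trips up some TF versions
predictions = new_model.predict(x_test)

# Each row holds 10 softmax probabilities; argmax gives the predicted digit
print(np.argmax(predictions[0]), "vs. true label:", y_test[0])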
I have been following your videos for almost 18 months and it's been amazing; your tutorials have helped me a lot. Thank you for these tutorials.
Thank you for revisiting the basics. I just hope it's a long, comprehensive series
Best tutorial I've seen yet, not once did I stop and think "what is he talking about?" lol, thank you!!
This was excellent. I’m a novice in ml and this is easy to follow and understand. Please do more. I’ve gotten lost in other people’s tutorials and videos that I just lose motivation and interest because they don’t simplify concepts like you do. Keep it up! Also appreciate the use of iPython nb.
I will do my best to keep going in this style!
I just finished all 43 tutorials of your "Machine Learning with Python" series. When it came to TensorFlow, I thought I needed to watch another TensorFlow 2 series just to continue those tutorials. But luckily you saved the day. I really love your tutorials, learning from scratch. Love you man, have a blast.......😍😍😍
Hey man, do we need to watch the old TensorFlow series, or should this be enough?
Just asking because the older series had 30 videos and this one has 11..
Will I be missing out on anything significant if I skip the older series?
@@shauryavardhan7225
The previous tutorial series has a much deeper explanation of how TensorFlow works and how it can be implemented in machine learning; it's easier to understand for a true beginner.
But TensorFlow 2.0 is much easier to implement, so why learn 1.0, right?
So currently I am learning only the basic syntax of 2.0 from other YouTube tutorials, then I will move back to the previous "Machine Learning" series to implement TensorFlow.
Honestly I have watched only 5 videos of this series and it feels kind of hard to catch up; it's not as detailed here.
But you can give this series a try and see how it works for you.
I have one question. Why is his profile picture more pixelated every time?
I'm pretty confident you're the first person on youtube to mention it. I've been slowly changing it :D
I thought it was because of my slow internet :D Nice vid, I really appreciate you doing these tutorials. And I really like your cups.
I just thought it was a funny little easter egg...
Good for you ;)
yes.
This video is awesome! Period. I've literally seen so many seminars and videos on YouTube... This is the first one that gave me all the comprehensive details while building the entire program on the side... Thanks.
It really helped.
11:17 *casually drinks from a shark!*
@@alamimouad mack the knife? :)
I know I'm a bit late, but @sentdex you are a legend, my guy. I got the e-book of NNFS, and this video series makes it even easier to make neural networks. No intro music that blasts your ears, straight to the point. Thank you!
I had searched so much for this type of video!!!
Ufffffff! Finally got it, thank you very much for this, and eagerly waiting for the next videos too 😊
hands down the best channel to learn deep learning from
Finally what I have been waiting for..
The fact that he explained the reasons behind every step is the most pleasing. Now I have a good understanding of how a neural network works. Thanks a lot, sir.
I subscribed to your channel already
Keep up the good work, sir
Anyone experiencing this error: "ValueError: You are trying to load a weight file containing 3 layers into a model with 0 layers. ". Change this line:
model.add(tf.keras.layers.Flatten())
to:
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
Should resolve the problem.
how does this work? and thank you
You need to add the input_shape:
model.add(tf.keras.layers.Flatten(input_shape=x_train.shape[1:]))
Thanks Dude
Thanks, buddy, I was experiencing that error for quite a while
I was on the right track with the error.
Thanks for the help.
Thanks for this video! It has stood the test of time, and was a great intro even two years after its first upload.
I came across an issue when trying to print val_loss, val_acc; it was complaining about the shape being 10000 when it needed to be 60000. It seems to work by normalising with: x_train, x_test = x_train / 255.0, x_test / 255.0
Yeah, I had the same problem.
You have to comment:
#x_train = tf.keras.utils.normalize(x_train, axis=1)
#x_test = tf.keras.utils.normalize(x_train, axis=1)
and write
x_train, x_test = x_train / 255.0, x_test / 255.0
instead.
Thanks man!
@@kIocuchl2
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
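For what it's worth, the two approaches in this thread don't scale the data the same way: tf.keras.utils.normalize L2-normalizes along the given axis by default, while dividing by 255 just rescales pixels into 0-1. Either should train fine here; the real bug in the commented-out lines above is normalizing x_test with x_train. A small sketch of both options:
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Option 1 (as in the video): L2-normalize each image row
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)  # x_test, not x_train

# Option 2: simply rescale pixel values from 0-255 into 0-1
# x_train, x_test = x_train / 255.0, x_test / 255.0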
The best intro by far. Thanks mate.
GREAT JOB! You explained it very well, thanks a lot. That's exactly what I was searching for! However, I think for beginners you should also mention the parameter y and that it is the label of the respective x.
One of the best videos I have ever watched as a beginner.
Incredible tutorial, I am familiar with TF and Keras and this is super well explained. Covered everything a learner should know.
Something cool for next one is AutoML or Autokeras
I am a total beginner to deep learning, and this is really the best explanation for making your first neural network model with TensorFlow. Thank you very much for your great explanation.
How did you decide to pass 128 units like you did? You said this is the number of neurons. How can I decide the number of neurons according to the data I have?
Lovely, lovely explanation! Lovely! What I have realized is: if you want to perfect Python coding, or any coding... you have got to practice! Practice! Practice!
Honestly, where do you learn all this stuff at such a pace?
You don't even have a CS major background. Not sure if this is impressive or a sign of my own unproductivity :(
Regardless, I appreciate your existence
It is always nice when people appreciate your existence.
@@SJ23982398 speak for yourself
It's called passion.
books
This is absolutely amazing. Please continue this! Absolutely love your work
Hey, I need help. I am experiencing this error when I try to train (fit) the model.
ValueError: Data cardinality is ambiguous:
x sizes: 60000
y sizes: 10000
Please provide data which shares the same first dimension.
I had this same issue, but then noticed a copy-and-paste error when normalizing the data. Make sure your x_test is normalized using x_test rather than x_train. When I updated my code as shown below, it worked
#normalize data
x_train = tf.keras.utils.normalize(x_train,axis=1)
x_test = tf.keras.utils.normalize(x_test,axis=1)
@@nickgerwe5302 ValueError: Shapes (32, 1) and (32, 10) are incompatible. Did you come across this error by any chance?
Thanks bro.... This is the best ‘getting into TensorFlow’ video I found
I thought I am watching Edward Snowden teaching me how to code lmao.
😃😃
Same
Best TensorFlow hands-on introduction on YouTube. (This is actually the first video of this kind I watched, but I don't think I need to watch another one.) BTW, watching this after watching 3Blue1Brown's series on Deep Learning is a joy.
Please make a playlist for deep learning. From basics to advanced stuff. It will be super helpful.
Thank you so much! Came here after going through all the theory/maths behind the NN, and strongly needed python syntax to execute it.
I am getting an error when executing the model.save() line; it shows a NotImplementedError. I am using Google Colaboratory to train the model
It seems to be a bug... I had the same problem. See the github bug report here: github.com/tensorflow/tensorflow/issues/22837 I downgraded to TF 1.10 and it solved the problem.
If you want to use the latest version you need to give the Flatten layer class the input variable input_shape like this: ` model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))`
@@Crabbpower thanks a lot man, I was going crazy
Does it matter if the input_shape parameter is more or less than the original pixel size? I think it matters. Do you know any way to overcome this?
@@Crabbpower love ya man, saved me the hassle
Try import keras instead of import tensorflow as tf.
Then change all the tf.keras references in the code to just keras.
If you're not a native English speaker and a total noob, try watching at 0.75 speed.
thank you so much for this tutorial. I have some idea for my thesis work, really hoping that this can work.
Do you think you could do a tutorial on reinforcement learning?
I would like to include that in this series for sure.
sentdex please do it, I've kinda been stuck at making cartpole work properly.
sentdex yeah! Deep Q would be great
cool
This is perfect. I started watching a TensorFlow guide but it's pretty out of date now that Keras has been incorporated
1875 training set instead of 60000?
wow this was one of the first tutorials I ever used to build a neural network. crazy discovering this page again from your QLora video.
Getting the error: "I'm too stupid and this seems way beyond my level of understanding".
Thats no wonder with your profile picture
*suddenly the model works:* maybe I'm a genius after all
I think this is not the right place to start on the subject; he already assumes a lot. This is a place to consolidate your knowledge by implementing a demo. Sorry, I can't give you a pointer to an introductory reference, but there are dozens of those.
Don't worry... Use Google Colab.... You don't need to install TensorFlow, just import it
I use Kaggle. You don't need to install TensorFlow.
Thanks. When you run model.fit, @15:16 it shows 60000/60000. With the current version (2.15.0), it displays the batch count instead. In my case, it shows 1875/1875, since the default batch size of 32 is used: 32 x 1875 = 60000. I assume when you ran this, the number referred to the number of items in the dataset.
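A quick sketch of that arithmetic, assuming the same MNIST setup as in the video:
import math

samples = 60000        # MNIST training images
batch_size = 32        # Keras default when fit() gets plain NumPy arrays
print(math.ceil(samples / batch_size))  # 1875 -> the "1875/1875" newer versions display

# Passing batch_size explicitly gives the same batch-based progress bar:
# model.fit(x_train, y_train, epochs=3, batch_size=32)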
You should make separate playlists/series for basic and advanced, so we noobs can level up in a more linear fashion. Not a total beginner-to-advanced series, but more of an explaining-on-a-whiteboard-how-things-work series
Compared to this video, you want more basic, more advanced, or similar?
sentdex similar or advanced, I think we all love you because your videos are so deep and connecting they make difficult difficul'nt
sentdex I really enjoy your videos, but since I haven't worked with tensorflow or similar this video threw many new functions and commands at me :D Could you maybe write some comments next to what you've done? It makes things easier to follow during and later on :) Additionally, any good updated resource to Keras/tensorflow you can recommend?
sentdex I vote for more basic : )
more advanced please
You made it very simple for me, I was pretty lost before this video! Thank you man
I have this error in a Jupyter notebook after installing and importing TensorFlow
The kernel appears to have died. It will restart automatically.
I had the same problem. Try conda install nomkl. Install the nomkl package in the same environment you are running your Jupyter notebook in.
Thank you so much for this, I have started learning and done the “hello worlds” of keras and am really trying to get to some advanced stuff and hopefully eventually reinforcement learning
Could you also talk a little about static vs dynamic graphs in tensorflow and pytorch in one of the future videos. It would be a huge help
Thanks for the suggestion. I'll see about working that in somehow.
May God bless you, I'm following you from Syria. You are a good man, I follow you from Syria
How can you have 60000 samples as input and 10000 as target? I get
"ValueError: Input arrays should have the same number of samples as target arrays. Found 60000 input samples and 10000 target samples."
You found out how to get past the error?
@@tingupingu3394 This code is very old; I recommend you go somewhere else. I don't remember the specific fix
@@tingupingu3394 from another comment here:
NonyaSerdo
3 months ago
I had the same problem. I accidentally forgot to change "x_train" to "x_test" in the second normalize statement. Correcting that gave me the expected results.
@@GeeVaaz Thank you. I wonder how this ever worked for him in the video.
Took a day to do this, but I thoroughly loved it. Thanks to his simple and interesting explanation.
I have got an error "ValueError: The first layer in a Sequential model must get an `input_shape` argument." in a Jupyter notebook.
My TensorFlow is version 1.5.0 GPU.
Kindly advise a solution.
getting this same error, did you figure this out?
Include an input_shape argument like this:
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
model.add(tf.keras.layers.Flatten(input_shape=x_train.shape[1:]))
Thanks sentdex, a good start at least for me as a beginner. The only problem encountered was changing:
predictions = new_model.predict([x_test]) to predictions = new_model.predict(x_test)
as pointed out in Jamal Abo's comment. Other than that it went smoothly.
BTW, as a reference for anyone, I'm using Windows 10 with Anaconda Python 3.7 64-bit and TensorFlow installed, and was able to complete the tutorial to the end.
I have a question and don't really remember if you covered it: why do you use Jupyter to code in for this tutorial? Or did you explain it in another video/written tutorial?
So glad you're updating this. Especially with a higher level package.
Please do some little PyTorch tutorials also.... because it's lit, easy to follow, and more Pythonic
Simply love your channel. You have explained this far better than my professor.
8:07 NameError: name 'x_train' is not defined. What happened? Where can I get that image of the 7?
Bro, run the first cell before that one
I like this video; sentdex's explanation is simple and easy to understand, and he is a passionate person!
Amazing ... waited a long time for this !!
No more waiting for you!
Eagerly waiting for the followup. This was very helpful
You shouldn't normalize the test data with respect to its own mean and std, as it is not supposed to be known in advance. The correct way to do it is to normalize with the mean and the std of the training set.
At 14:26 binary crossentropy is totally not what you explained in the video... it's not because there are two classes (cats or dogs) that you use "binary" crossentropy as it might sound. Binary crossentropy works no matter how many classes you have, and the difference between it and categorical crossentropy is that in the former case the classes are not mutually exclusive, e.g. there could be a dog and a cat in the image at the same time, and the output probabilities are independent for each class, so you can have .99 dog and .98 cat; In contrast, the latter, categorical crossentropy, means that the classes are mutually exclusive, e.g. there can be only one kind of animal in an image, so for example .79 dog and .21 cat.
In mnist we use categorical crossentropy not because there are more than two classes but because there is only one kind of digit in one image.
Hey thanks for the comments.
With normalization, I've seen it done in quite a few ways. You're probably right that the most statistically sound method is to normalize the test set with the same statistics as the training set. I've found it to be fine if what you're predicting is large enough to normalize again, but this is just a toy problem, so just about whatever you do here will work.
With binary cross entropy, I guess I am just going to have to disagree... It's definitely used when you have just two classes.
For cross entropies, what documentation do you refer to?
In tensorflow binary and categorical crossentropies are implemented with
tf.nn.sigmoid_cross_entropy_with_logits and tf.nn.softmax_cross_entropy_with_logits, and these functions are called internally in keras as you can see in github.com/tensorflow/tensorflow/blob/r1.10/tensorflow/python/keras/backend.py
I suggest that you refer to the official documentation of these two functions to understand what they actually mean.
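A small sketch of the distinction being argued here, using standard Keras losses (the cat/dog numbers are made-up illustrations, not from the video):
import tensorflow as tf

# Non-mutually-exclusive labels (multi-label): an image can contain BOTH a cat and
# a dog, each output is an independent sigmoid, and binary crossentropy is applied
# per class.
bce = tf.keras.losses.BinaryCrossentropy()
print(bce([[1.0, 1.0]], [[0.99, 0.98]]).numpy())  # cat and dog both present

# Mutually-exclusive labels (like MNIST digits): outputs go through a softmax that
# sums to 1, and categorical crossentropy is used.
cce = tf.keras.losses.CategoricalCrossentropy()
print(cce([[0.0, 1.0]], [[0.21, 0.79]]).numpy())  # exactly one class is correct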
Actually this video is kind of bullshit. What I thought it wanted to provide, a brief introduction to first steps in Keras, makes up only 1/10 of the video, without any significant explanation. But he explains other things that are actually wrong. This guy should first know what he is talking about before making videos teaching others nonsense.
@@auseryt Hey, so why don't you make your own videos and share your awesomeness with us?
@@Cloudsorrow256 be thankful that someone tells you to not waste your time with this.
After the battle of making the layers myself with your guide in the first tutorial set, the ease of using Keras is just unfair lol
Thanks for another great tutorial
Another great tutorial! Inspires me to make my own AI videos.
AI: "When you smile, you go from a 6 to an 8!"
Me: "You were trained on the mnist data set, weren't you?"
(True story)
What's the MNIST dataset?
Do you mean 8! = 40320 or just 8
Hi Sentdex, I think I remember downloading your TensorFlow videos, maybe a year ago. However, I cannot find them now. To tell the truth, I did not understand those old videos well enough. However, this Keras series is far more understandable. I really appreciate your contribution!
I know this is an old video and I know you're using the convention of X being input and Y being output, but I think it is important to really explain that since for this particular example your data is a plot of x and y points in a 28x28 grid. I was slightly confused for a while until I printed the y values to see that they were the outputs and not the y-coordinate data.
This has helped me understand how deep learning using neural networks works, and how to use the TensorFlow, Keras & Python libraries in Jupyter.
*I gotta get that shark mug*
I used to be jealous of your programming skills. Now I am jealous of your programming skills and your shark cup :) Great video!
IDLE: mr.Harrison i don't feel so good..
Hahah :)
Very good! Finally, I got no errors in my Jupyter notebook after running the code. It lets me know that I'd better subscribe to your videos!
Whenever I do the part
predictions = new_model.predict([x_test])
I get the following error: 'list' object has no attribute 'shape'
I'm following the exact same steps as you did and I was careful about typos
what worked for me was removing the brackets around x_test:
predictions = new_model.predict(x_test)
Yeah, it works after I removed the [ ]. I am confused because Sentdex emphasizes in the video that you should put ([x_test]), not (x_test). No idea why his code doesn't work in my Python environment....
@@SmartieEdits
For those who are on an AMD GPU,
to install TensorFlow follow these steps:
1) create a virtual environment with Python 3.6
2) pip install tensorflow-directml
3) write the code as:
import os
os.environ["DML_VISIBLE_DEVICES"] = "0"  # set before importing TF so DirectML uses device 0
import tensorflow as tf
# now your code here
Hey, I seem to get this error
ValueError: You are trying to load a weight file containing 3 layers into a model with 0 layers.
whenever I run the line of code below:
new_model = tf.keras.models.load_model('epic_num_reader.model')
Any hint of how to resolve this??
Same here. Keras has an issue opened on github
github.com/keras-team/keras/issues/9822
You might use this for tf version 1.9.0
model.add(tf.keras.layers.Flatten(input_shape=x_train.shape[1:]))
change the first line in the model to this : model.add(tf.keras.layers.Flatten(input_shape=x_train[0].shape))
Check this stackoverflow.com/questions/52664110/attributeerror-sequential-object-has-no-attribute-output-names/53980337#53980337
As for changing to "keras" from "tensorflow.keras", just remove the "tensorflow." part and the rest is the same
by far the best intro to tensorflow
Getting error while loading model in Google Colaboratory. [AttributeError: 'list' object has no attribute 'shape']
@Jostein Dyrseth Same issue here. In case i use just >>> predictions = new_model.predict(x_test);
I receive : "IndexError: list index out of range"
Any idea how to fix it ?
It was really clear and helpful; honestly, you saved me from confusion in this case
Yes! Tensorboard please!
model.save('epic_num_reader.model')
new_model = tf.keras.models.load_model('epic_num_reader.model')
this is my error:
W0706 13:50:15.307831 3688 hdf5_format.py:263] Sequential models without an `input_shape` passed to the first layer cannot reload their optimizer state. As a result, your model is starting with a freshly initialized optimizer.
please help.
Same here
Try:
model.add(tf.keras.layers.Flatten( input_shape=(28,28) ) )
Yesssssss, I want moreee!!!
just this video alone deserves a subscriber
91 dislikes????? I'm wondering why??? It's just perfect (for beginners)
he doesn't explain anything
no. I am struggling with this..