This man explains what machine learning is in the simplest way I've ever heard. Good one, keep it up
Thanks Yoga!
Sooo trueeee!!!!
As someone who had just begun self learning programming, this explanation about machine learning is very clear and understandable. Thank you!
Great to hear Aysha! Thanks! :)
Best intro to machine learning I have seen. Thanks a lot Laurence
Laurence, you're just a genius. I have tried to understand ML from many tutorials, but yours is the first that really made it simple for me.
Laurence, thank you so much for taking the time to put out such concise, intuitive walkthroughs. You manage to make everything going on behind the curtain really accessible and unintimidating!
Thanks! Glad you enjoyed! :)
In traditional programming, we infer answers after rules act on data, but in ML, we infer rules after answers act on data. Got that really straight.❤️❤️❤️❤️
Nice!
You know those videos that you start watching and then get glued to... :D Well done Laurence; in the first few seconds I wouldn't have bet on watching it all the way through
Thanks Jes!
Was just waiting for this from Laurence. I'm learning machine learning daily, and it's time to take it to the next level. Thanks Laurence, Google, and TensorFlow
Thanks, Shashank!
@shashank barki would you mind sharing how you are learning ML?
Subscribed after watching this. Love the way you explain. You explain the concepts very clearly, and you also add a little bit of code, which gives me great preparation for the coding application. Keep up the good work, Laurence
Brilliant explanation Laurence.
Thanks, Vishal
Laurence keep more videos coming:) Was a pleasure watching and learning
Working on it! :)
I'm really glad TensorFlow itself is putting out tutorials now.
I have a research project that uses machine learning, and this helps me learn and understand each lesson about it.
That's great, thanks for letting us know! :)
This is one of the clearest explanations ever ! Great job!
Thanks!
Laurence Moroney what a service to humanity that google is releasing tensorflow to the public domain. The benefit that will come out of this -and i don’t mean financial - is immeasurable. It’s like IBM releasing the paper on FFTs in the 60s !!
Subscribed Sir Laurence! Thanks for the simple yet concise explanation in a short time.
Thank You for explaining this so clearly and eloquently.
my procrastination has transcended to new levels
I am watching this instead of studying for my 2 finals or working on my 4 remaining projects
with less than 2 weeks left to finish all of those things lol
Did you finish?
@@Intrinsion yea, only because my software development professor decided to make the final project optional
Oops! Sorry about that! :)
Thanks. You just spiked my interest in this course
Hello, Laurence Moroney,
Astounding presentation. How quickly and how brilliantly you put such a huge task look so simple. I must admire your ability. Keep up the good work, thumbs up here.
Thanks Fet!
This is awesome, the best explanation I have ever heard on machine learning. I got a feel for the beauty of neural networks from your class. Great job, keep posting these, cheers!
Thanks so much! :)
You're welcome kid.
@@LaurenceMoroney I didn't notice it was actually you, sir. This model cannot recognise polynomials like square or cubic equations. I provided it with xs as 1.0, 2.0...
and ys as their squares, but it never got better than a loss of 6.2222, and if I entered 10, it gave me a value of 36.67...
???
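That's expected: a single Dense(1) unit can only ever learn a straight line y = w*x + b, so it has no way to fit squares. A sketch of what could work instead (the hidden-layer size and optimizer settings here are arbitrary choices, not from the video):

```python
import numpy as np
import tensorflow as tf

# Training pairs where y is the square of x.
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float)
ys = xs ** 2

# A hidden relu layer (size 16 is an arbitrary choice) gives the model the
# capacity to bend; a lone Dense(1) can only ever produce w*x + b.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=[1]),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss="mean_squared_error")
model.fit(xs, ys, epochs=1000, verbose=0)

# Inside the training range the fit is decent; 10.0 lies outside it,
# so extrapolation there will still be rough.
print(model.predict(np.array([3.0]), verbose=0))
```

Even with the hidden layer, predicting at 10 is extrapolation beyond the training range, so don't expect it to hit 100 exactly.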
Extremely helpful explanation, thank you very much!
That was a very good explanation, thank you!
this video is soo good❣️
I watched this many times to understand what ML is.
I studied Matlab at University, this video is also good for review of ML😊
Wow, I've been waiting for such an opportunity to learn machine learning from an expert. Thanks so much, and keep it up; we need it for our big project, God willing.
I hope it works out :)
Thanks a lot
finally some video that makes digging into the topic understandable.
Thanks!
You're genius Laurence, for sure! Excellent demonstrations and brilliant examples.
Thanks!
I love these tutorials and videos that Tensorflow puts out. Super informative. Thank you Laurence, what a great video! You bet I'll keep watching these series! Have a fantastic day everyone!😁👍
Thanks! :)
This opened up a new world for me
Precise and Concise. Thank you Lawrence!
Amazing video. Though I do feel the need to say that playing scissors with the thumb out is sketchy and looks like you are trying to straddle the line between scissors and paper.
It's almost paper, like 60% paper
Hey there Laurence. Really good explanation, thanks for putting it together. Just wanted to ask: how often will these videos come out?
www.coursera.org/instructor/lmoroney
Once per second
This series is 4 videos, coming out weekly
@@laurencemoroney655 4 is a small number. 😐 When is the second season estimated to release?
@@javiersuarez8415 Haha -- I haven't gotten around to filming a second season yet, but as they look like they're going to be popular, I should get moving on that... :)
Nicely done. Thank you so much for sharing this video with us,
The code is wrong. Not a good sign when the Hello World code from the official channel doesn't work.
print(model.predict([10.0])) throws an error; you need to use something like
print(model.predict(x=np.array([10.0])))
Loved the intro. Waiting for the next video. I was searching for such a tutorial for a long time and finally found one. Thanks TensorFlow.
Welcome! Thanks for watching!
Great explanation. I am taking Lawrence's courses in ML / Tensorflow. Very useful. Thanks so much!
Thanks John!
here's the code needed from the video, if anyone wants to try it out
import tensorflow as tf
from tensorflow import keras
import numpy as np
model=keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
xs=np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys=np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
model.fit(xs, ys, epochs=1100)
print(model.predict(np.array([10.0])))
I am an absolute beginner and wanted to run the code, but could only get errors at first.
In case anyone needs this broken down, I added the first 3 lines that are necessary to run the tensorflow and keras libraries, installed previously via terminal.
Very good explanation. Easy to understand. Continue the series
We are :)
It's such an amazing explanation. Thanks a lot, Laurence!
Thank you! :)
very good and simple lecture. thank you.
Great Introduction!
Rinse spin repeat.×3 or X4 to remove one situation. ......I can do this. Thanks for the patience ☺️ God sure made a blessing in you!
Nice explanation. I am also building a course on ML in Python (for a University) more from an implementation perspective. This surely helps!
I really loved the videos, so I liked them all before watching, because I'm sure I will watch them all :D inshallah :D Thanks for your effort!
Welcome!
Hey Laurence,
it's really a pleasure to learn from your videos. Waiting for more videos to come and take us deeper into AI.
Thanks Rishish! :)
Great, now I'm even more inquisitive about the subject
Wow such a great explanation with a simple example. Thanks.
Thanks! :)
These videos are literally making me fascinated to learn ML. You are definitely a life saver 🙏. Thanks a ton 👍
Very welcome! :)
Thanks for teaching this. You made this very easy
Great and simple video! Thank you!
Welcome! :)
please more of that its so good explained
Working on it! :)
Great video. You mention the small error is due to uncertainty from the low sample size; is it not possible that the model simply descended to a not-quite-accurate relationship? Granted, the cause would still be low sampling, but the main question is whether the error is explicitly programmed to reflect uncertainty, since the answer could still be exactly 19 and be labeled uncertain.
You are a very, very good scientist. Thank you very much. I am from Jordan, studying for a master's in computers and networks.
Thanks Raed!
Very good explanation thank you
Welcome! :)
Awesome, master teacher Laurence... now I need AutoML to learn ML
Haha, so do I! :)
Very good teacher thank you
Thank you for this awesome video
You are welcome! :)
Hi Laurence,
I am trying to implement the same code with two inputs, x1 and x2. I am finding it difficult to work out 1) how to specify the x values, i.e. what the matrix of the two inputs should look like, and 2) what the input shape should be here. Could you please help with this?
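For what it's worth, a minimal sketch of the two-input case: each sample becomes a row of two values, and the layer gets input_shape=[2]. The target y = 2*x1 + 3*x2 below is made up purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Each row of xs is one sample holding both features, so xs has shape
# (num_samples, 2) and the Dense layer gets input_shape=[2].
xs = np.array([[0.0, 1.0],
               [1.0, 0.0],
               [1.0, 1.0],
               [2.0, 1.0],
               [1.0, 2.0]], dtype=float)
# Hypothetical relationship, just for illustration: y = 2*x1 + 3*x2
ys = 2 * xs[:, 0] + 3 * xs[:, 1]

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[2])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=2000, verbose=0)

# Predictions take the same (batch, 2) shape: one row per sample.
print(model.predict(np.array([[2.0, 2.0]]), verbose=0))  # close to 2*2 + 3*2 = 10
```

The key change from the one-input example is the shape of xs and the input_shape argument; everything else stays the same.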
Videos like this > A $5,000 college course
Thanks. I like this way of teaching.
So nice , easy to understand , Thanks
Glad you like! :)
super explanation. you are a great teacher
Thanks Naduni!
Really good explanation!
Can you point me to the documentation? And if you're willing to help, could you assist me in making this my final project?
I didn't understand what exactly the input shape is and why it is 1. Is it because it accepts our input array one value at a time, or is there another reason? Also, I can't understand how and why an NN with just 1 neuron produces 18.99 instead of 19, because 1 neuron means it can predict only an exact value, and any deviation should be impossible?
Input shape is 1 because we predict the result for a single-value input (i.e. 10).
The neuron won't give the *exact* value because it deals in probabilities, not certainties; the prediction is a very high probability that the answer is 19, so when you evaluate it as a number you get something close to 19
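One way to see what's happening under the hood is to print the trained weight and bias; they land near 2 and -1 rather than exactly on them, which is where the 18.98 comes from:

```python
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

# A single Dense unit holds exactly one weight and one bias; its output is
# always w*x + b, i.e. a straight line.
w, b = model.layers[0].get_weights()
print(w, b)  # w lands near 2 and b near -1, but not exactly
print(float(w[0][0]) * 10 + float(b[0]))  # the same number predict() returns for 10.0
```

Since gradient descent only ever gets close to the best w and b, w*10 + b comes out as something like 18.98 rather than exactly 19.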
Great video! THANK YOU.
I've been trying to get to this point for a while.
Getting everything set up is a hurdle in itself, at least on OSX
Thank you very much
Awesome presentations skills.
Excellent explanation, thank you very much!
You're welcome, Erica!
In the math example we get a NaN when typing other x values like 100 into the array. Do you know why?
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import numpy as np
x = [-1.0, 2.0,4.0,6.0,7.0, 100.0]
y = []
x_test = [10]
for i in x:
y.append(i*2 +5)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=1, input_shape=[1])
])
optim='sgd'
model.compile(optimizer=optim,
loss='mean_squared_error')
xs = np.array(x, dtype=float)
ys = np.array(y, dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict(x_test))
Data should really be normalized when fed in for training, or the optimizer/loss won't work. We get away with it when we use small values, but that gets exposed at larger values. To do this you should normalize the training/test data and then retrain.
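A sketch of that manual normalize/denormalize flow, using the numbers from the question above (the zero-mean/unit-variance scaling here is just one common choice):

```python
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 2.0, 4.0, 6.0, 7.0, 100.0], dtype=float)
ys = 2 * xs + 5

# Scale inputs and targets to zero mean / unit variance so the sgd updates
# stay small; raw values like 100.0 make the gradients explode into NaN.
x_mean, x_std = xs.mean(), xs.std()
y_mean, y_std = ys.mean(), ys.std()
xn = (xs - x_mean) / x_std
yn = (ys - y_mean) / y_std

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xn, yn, epochs=500, verbose=0)

# Normalize the query the same way, then denormalize the prediction.
pred_n = model.predict((np.array([10.0]) - x_mean) / x_std, verbose=0)
print(pred_n * y_std + y_mean)  # close to 2*10 + 5 = 25
```

The same statistics computed on the training data must be reused for every prediction, otherwise the denormalized answer is meaningless.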
Lawrence, great job man! thank you so much
Thanks! :)
Nice, clear explanations. This series is off to a good start. 😊 Looking forward to seeing more videos.
Thanks Bianca!
I feel like Neo : "I know kung fu 🥋! " . That was so concise !!! Thank you very much ...
haha! Thanks :)
Laurence will do that to you lol, amazing teacher
eagerly waiting for part-2
It's already out. Part 3 next week.
Rules + data vs. answers + data. Pretty good.
Thanks!
Thanks!
I have a question on something I don't understand: Dr. Moroney said the prediction is not perfect because the computer is trained on 6 values that form a straight line, but outside those 6 the relationship may not be straight (although it is highly probable that it is). I don't get this point: since it is an NN with only one neuron, the prediction has to be a straight line (it should be like a linear regression). Am I correct, or did I interpret something wrong?
You missed 'tf.keras.' in the 1st line. So, model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])]) will be the correct code.
oops!
very nice way of teaching
I'm trying! :)
Excellent video! Thanks
My pleasure! :)
Here's the working code:
from tensorflow import keras
import numpy as np
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs=np.array([-1.0,0.0,1.0,2.0,3.0,4.0], dtype=float)
ys=np.array([-3.0,-1.0,1.0,3.0,5.0,7.0], dtype=float)
model.fit(xs,ys,epochs=500)
x1=np.array([10.0], dtype=float)
print(model.predict(x1))
Excited for this series!
Me too! :)
Hi, thank you for the great video! I have been playing around with your example using different sets of numbers. For example, I extended the first set from 1.0 to 12.0, and for the second set I used the days of the month (like 31, 28, 31, 30, etc.). With a set of 12 pairs it worked fine. It managed to predict the 13th month as being 31 days. Then I wanted to be smarter and extended the data to 24 pairs, basically repeating one year, and tried to predict the 25th and 26th month lengths. The problem is that with 24 pairs of numbers the error keeps growing to infinity and the final result is NaN. I wonder why?
Yeah -- when you start using a lot of numbers the error rate grows, because the dataset isn't normalized.
Try normalizing, then learning from the normalized values, and then denormalizing afterwards.
@@laurencemoroney655 Thank you very much for your kind answer!
@@laurencemoroney655 Hi, you mean normalizing the data? How do I do that? Is it by adding a normalization layer?
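One option (a sketch, assuming a TF version recent enough to have the Keras Normalization layer, roughly 2.6+) is to let the model normalize for you: adapt() learns the data's mean and variance, and the layer standardizes inputs before the Dense unit sees them:

```python
import numpy as np
import tensorflow as tf

# Months 1..24 and their lengths, repeating one year (as in the question).
xs = np.arange(1.0, 25.0)
ys = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] * 2, dtype=float)

# adapt() computes the mean/variance of the data; the layer then standardizes
# inputs inside the model, so sgd sees well-scaled values instead of blowing up.
norm = tf.keras.layers.Normalization(input_shape=[1], axis=None)
norm.adapt(xs)

model = tf.keras.Sequential([norm, tf.keras.layers.Dense(units=1)])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

# A single linear neuron can only fit a line through the 28..31 values,
# so expect roughly the average month length, not an exact 31.
print(model.predict(np.array([25.0]), verbose=0))
```

Note the separate caveat: month lengths aren't a linear function of the month number, so even with normalization a one-neuron model can only predict something near the average.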
clear explanation thank you so much!
Thanks!
🌟 🌟 🌟 🌟 🌟 Wow! What a very clear and straightforward explanation! Thank you!
Thank you, Thunderjaw!
Nice explanation. Thank you.
Thanks! :)
Running that code, I got an error: `ValueError: Unrecognized data type: x=[10.0] (of type )` -- fixed when I changed the predict() arg to `np.array([10.0])`
Do print(model.predict(np.array([10.0], dtype=float))) instead
Good tutorial, waiting for the continuation. Like from Brazil hu3br
Thank you! :)
This is brilliant !!!
Thanks Jaspreet!
I love you guys!!
Thanks!
Wow, ever since AI/ML got my attention I have been looking for something like this. Thanks @lmoroney for bringing this to us.
Welcome! Glad you enjoyed! :)
Fantastic video thank you
Thanks, James!
Perfect explanation
Thanks!
Excellent explanation. Thank you very much, but where are the rest of the videos? Could you kindly share them? I tried to find the course on Coursera but I can't find you, and it would be great if you shared this course on Coursera.
Coursera Course: www.coursera.org/learn/introduction-tensorflow/
Rest of the videos will be published on this channel
Ohh thanks very much for the information. Best regards.
Awesome video. Thank you!
Thanks Shaji!
Thank you so much. You save my life.
Welcome! :)
I have a typical use case: predicting the value of y for a given x, but the logic for calculating y (i.e. in this case y=2x-1) changes on a daily basis. Can TensorFlow predict this kind of data?
If the data changes, the model should be retrained
Great video Laurence! For me the code you used failed "ValueError: Unrecognized data type: x=[10.0]". After changing the last line (print model predict) to this it worked: print(model.predict(tf.convert_to_tensor([10.0])))
Given that xs and ys are arrays but the input you're passing is the list [10.0], it errors; you can also try: predict(np.array([10.0]))
Gracias, thank you, danke, merci
Ok, you show some code that builds and trains the model before making a prediction. I found that on subsequent runs the accuracy increases; I realize that for some applications this can result in 'overfitting'. So once I am happy with the level of accuracy, how can I apply the trained model without running the training (how/where is the model saved)? Really love this course; my head is working overtime thinking of ways I want to try and apply it!
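A sketch of the save-and-reload flow (the filename is just an example; depending on TF version the format may be a .keras file, an .h5 file, or a SavedModel directory):

```python
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

# Write the trained architecture and weights to disk once you're happy.
model.save("hello_world.keras")

# Later -- e.g. in a separate script -- load it and predict with no training.
restored = tf.keras.models.load_model("hello_world.keras")
print(restored.predict(np.array([10.0]), verbose=0))
```

Training happens only in the first script; the second just loads the saved weights, so there's no risk of accidentally training further.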
Plz be regular and consistent. 😊
Looks like a job for normalization 😉
Trying!
@@laurencemoroney655 Thanks... loved your videos, very knowledgeable
I've set my YouTube language to Chinese (as I'm studying the language) and noticed that I can no longer see the Google link to the code in this video's description field... That's kind of annoying (solved by opening the video in a private window and getting the link from the English description). Why leave the link out of the Chinese-language description? YouTube's already blocked in China anyway..
I don't get that, when I set the language to Chinese I still see the same description. Hmmm...
I also can't see the linked code snippet, so I chose to type the code in myself
Please make a series on audio data loading and analysis using TensorFlow