This man explain what machine learning is in the simplest way I ever heard. Good one, keep it up
Thanks Yoga!
Sooo trueeee!!!!
Laurence, thank you so much for taking the time to put out such concise, intuitive walkthroughs. You manage to make everything going on behind the curtain really accessible and unintimidating!
Thanks! Glad you enjoyed! :)
As someone who has just begun self-learning programming, I found this explanation of machine learning very clear and understandable. Thank you!
Great to hear Aysha! Thanks! :)
In traditional programming, rules act on data to give us answers; in ML, answers and data are used to infer the rules. Got that really straight.❤️❤️❤️❤️
Nice!
Laurence, you're just a genius. I have tried to understand ML from many tutorials, but it's only from yours that I really and simply understand it.
Was just waiting for this from Laurence. I'm learning machine learning daily, and it's time to take this to the next level. Thanks Laurence, Google, and TensorFlow!
Thanks, Shashank!
@shashank barki would you mind sharing how you are learning ML?
my procrastination has transcended to new levels
I am watching this instead of studying for my 2 finals or working on my 4 remaining projects
with less than 2 weeks left to finish all of those things lol
Did you finish?
@@Intrinsion yea, only because my software development professor decided to make the final project optional
Oops! Sorry about that! :)
Subscribed after watching this. Love the way you explain. You explain the concept very clearly, and you also add a little bit of code, which gives me great preparation for the coding application. Keep up the good work, Laurence!
Laurence keep more videos coming:) Was a pleasure watching and learning
Working on it! :)
You know those videos that you start watching and then get glued to them... :D Well done Laurence, in the first few seconds I wouldn't have bet on watching it
Thanks, Jes!
@@laurencemoroney655 k
Hey there Laurence. Really good explanation; thanks for putting it together. Just wanted to ask: how often will these videos come out?
www.coursera.org/instructor/lmoroney
Once per second
This series is 4 videos, coming out weekly
@@laurencemoroney655 4 is a small number. 😐 When is the second season's estimated release?
@@javiersuarez8415 Haha -- I haven't gotten around to filming a second season yet, but as they look like they're going to be popular, I should get moving on that... :)
Subscribed Sir Laurence! Thanks for the simple yet concise explanation in a short time.
Amazing video. Though I do feel the need to say that playing scissors with the thumb out is sketchy and looks like you are trying to straddle the line between scissors and paper.
It's almost paper, like 60% paper.
Thank You for explaining this so clearly and eloquently.
Brilliant explanation Laurence.
Thanks, Vishal!
Hello, Laurence Moroney,
Astounding presentation. How quickly and how brilliantly you put such a huge task look so simple. I must admire your ability. Keep up the good work, thumbs up here.
Thanks Fet!
Nice explanation. I am also building a course on ML in Python (for a University) more from an implementation perspective. This surely helps!
The code is wrong. Not a good sign when the Hello World code from the official channel doesn't work.
print(model.predict([10.0])) throws an error; you need to use something like
print(model.predict(x=np.array([10.0])))
this video is soo good❣️
I watched this many times to understand what ML is.
I studied Matlab at University, this video is also good for review of ML😊
I'm really glad TensorFlow itself is doing tutorials right now.
I have a research project that implements machine learning, and this helps me learn and understand each lesson.
That's great, thanks for letting us know! :)
You're genius Laurence, for sure! Excellent demonstrations and brilliant examples.
Thanks!
Extremely helpful explanation, thank you very much!
This is one of the clearest explanations ever ! Great job!
Thanks!
Laurence Moroney, what a service to humanity that Google is releasing TensorFlow to the public. The benefit that will come of this (and I don't mean financial) is immeasurable. It's like IBM releasing the paper on FFTs in the '60s!!
That was a very good explanation, thank you!
Nicely done. Thank you so much for sharing this video with us,
I love these tutorials and videos that Tensorflow puts out. Super informative. Thank you Laurence, what a great video! You bet I'll keep watching these series! Have a fantastic day everyone!😁👍
Thanks! :)
Rinse, spin, repeat ×3 or ×4 to remove one situation. ...I can do this. Thanks for the patience ☺️ God sure made a blessing in you!
Great video! THANK YOU.
I've been trying to get to this point for a while.
Getting everything set up is a hurdle in itself, at least with OSX.
Thanks. You just spiked my interest in this course
finally some video that makes digging into the topic understandable.
Thanks!
Can you give me the documentation? And if you're willing to help, you could assist me in making it my final project.
here's the code needed from the video, if anyone wants to try it out

import tensorflow as tf
from tensorflow import keras
import numpy as np

model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
model.fit(xs, ys, epochs=1100)
print(model.predict(np.array([10.0])))  # newer TF versions reject a plain list here

I am an absolute beginner and wanted to run the code, but could only get errors at first. In case anyone needs this broken down: I added the first 3 lines, which are necessary to load the tensorflow, keras and numpy libraries (installed previously via terminal).
Great video Laurence! For me the code you used failed "ValueError: Unrecognized data type: x=[10.0]". After changing the last line (print model predict) to this it worked: print(model.predict(tf.convert_to_tensor([10.0])))
xs and ys are arrays, but as the prediction input you're passing a plain list [10.0], so it errors. You can also try: print(model.predict(np.array([10.0])))
Wow, I've been waiting for such an opportunity to learn machine learning from an expert. Thanks so much and keep it up; we need it for our big project, God willing.
I hope it works out :)
Thanks a lot
Hey Laurence,
It's really a pleasure to learn from your videos. Waiting for more videos to come and take us deeper into AI.
Thanks Rishish! :)
Loved the intro. Waiting for the next video. I was searching for such a tutorial for a long time and finally found one. Thanks, TensorFlow.
Welcome! Thanks for watching!
Precise and Concise. Thank you Lawrence!
This is awesome, the best explanation I have ever heard on machine learning, and I got a feel for the beauty of neural networks when I heard your class. Great job, keep posting like these, cheers
Thanks so much! :)
You're welcome kid.
@@LaurenceMoroney I didn't notice it was actually you, sir. This function cannot recognise polynomials like quadratic or cubic equations. I provided it with xs as 1.0, 2.0...
and ys as their squares, but it never got better than a loss of 6.2222, and if I entered 10 it gave me a value of 36.67...
???
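Editor's note for anyone hitting this same wall: a single Dense neuron computes w*x + b, so it can only ever draw a straight line, and no amount of training lets it fit y = x². In fact, if the xs were 1 through 6, the best mean-squared error any straight line can achieve on their squares works out to about 6.22, suspiciously close to the 6.2222 loss floor reported above. A hedged sketch of one way past it (the layer sizes and optimizer here are arbitrary choices, not from the video):

```python
# A single linear neuron can never beat the straight-line loss floor
# (~6.22 on xs = 1..6 with ys = their squares). Hidden layers with a
# nonlinear activation (relu) let the network approximate the curve.
import numpy as np
import tensorflow as tf

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype=float).reshape(-1, 1)
ys = xs ** 2  # the squares the one-neuron model couldn't learn

model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=16, activation='relu', input_shape=[1]),
    tf.keras.layers.Dense(units=16, activation='relu'),
    tf.keras.layers.Dense(units=1)  # linear output for regression
])
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(xs, ys, epochs=2000, verbose=0)

loss = model.evaluate(xs, ys, verbose=0)
print(loss)  # well below the ~6.22 floor of the single linear neuron
```

The key design point is the nonlinear activation: stacking purely linear layers would still collapse to a single line.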
These videos are literally making me feel fascinated to learn ML. You are definitely a lifesaver 🙏. Thanks a ton 👍
Very welcome! :)
You missed 'tf.keras.' in the 1st line. So model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])]) will be the correct code.
oops!
Great explanation. I am taking Lawrence's courses in ML / Tensorflow. Very useful. Thanks so much!
Thanks John!
Great, more inquisitive on the subject
What an amazing explanation. Thanks a lot, Laurence!
Thank you! :)
I feel like Neo : "I know kung fu 🥋! " . That was so concise !!! Thank you very much ...
haha! Thanks :)
Laurence will do that to you lol, amazing teacher
This opened up a new world for me.
Great video. You mention the small error is due to uncertainty from the low sample size. Is it not possible that the model simply descended to a not-quite-accurate relationship? Granted, the cause would still be low sampling, but the main question is whether the error is explicitly programmed to reflect uncertainty, since the output could still be exactly 19 and yet be labeled uncertain.
Great and simple video! Thank you!
Welcome! :)
Please, more of that; it's so well explained.
Working on it! :)
Thanks for teaching this. You made this very easy
Very good explanation. Easy to understand. Continue the series
We are :)
Gracias, thank you, danke, merci
Awesome master teacher, Laurence. Now I need AutoML to learn ML.
Haha, so do I! :)
You are a very, very good scientist. Thank you very much. I am from Jordan; I am studying for a master's in computers and networks.
Thanks Raed!
Wow such a great explanation with a simple example. Thanks.
Thanks! :)
Thank you very much
very good and simple lecture. thank you.
Here's the working code:

from tensorflow import keras
import numpy as np

model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
model.fit(xs, ys, epochs=500)
x1 = np.array([10.0], dtype=float)
print(model.predict(x1))
Very good teacher thank you
thank u for this awesome video
You are welcome! :)
Nice, clear explanations. This series is off to a good start. 😊 Looking forward to seeing more videos.
Thanks Bianca!
Very good explanation thank you
Welcome! :)
Thanks. I like this way of teaching.
3:30
-So if you saw it, how did you get that?
-Idk
**proceeds to explain exactly how I got that**
Finally none Indian teacher, Thanks
Great Introduction!
Rules + data vs. answers + data. Pretty good.
Thanks!
Thanks!
Good tutorial, awaiting the continuation. Like from Brazil hu3br
Thank you! :)
Videos like this > A $5,000 college course
Excellent explanation, thank you very much!
You're welcome, Erica!
Really good explanation!
Ok, you show some code that builds and trains the model before making a prediction. I found that on subsequent runs the accuracy increases; I realize that for some applications this can result in 'overfitting'. So once I am happy with the level of accuracy, how can I apply the trained model without re-running the training (how/where is the model saved)? Really love this course; my head is working overtime thinking of ways I want to try and apply it!
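Editor's note on the save/load question: the standard Keras answer is model.save() plus load_model(). A hedged sketch, using the same data as the video (the filename is arbitrary, and the native .keras format needs a reasonably recent TensorFlow; older versions used an .h5 file instead):

```python
# Train once, save to disk, then reload and predict without retraining.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float).reshape(-1, 1)
model.fit(xs, ys, epochs=500, verbose=0)

model.save('hello_world_model.keras')  # weights + architecture in one file
restored = tf.keras.models.load_model('hello_world_model.keras')
print(restored.predict(np.array([[10.0]]), verbose=0))  # no retraining needed
```

In a later session you would skip the fit() entirely and just call load_model() on the saved file.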
So nice, easy to understand, thanks!
Glad you like! :)
In the math example we get a NaN when typing other x values into the array, like 100. Do you know why?

from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import numpy as np

x = [-1.0, 2.0, 4.0, 6.0, 7.0, 100.0]
y = []
x_test = [10]
for i in x:
    y.append(i * 2 + 5)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array(x, dtype=float)
ys = np.array(y, dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict(np.array(x_test, dtype=float)))
Data should really be normalized when fed in for training, or the optimizer/loss won't work. We get away with it when we use small values, but that gets exposed at larger values. To do this you should normalize the training/test data and then retrain.
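Editor's note: a minimal sketch of that normalization fix, applied to the NaN example above (the standardization constants are computed from the data; this is one common recipe, not code from the video):

```python
# Standardizing x and y before training keeps plain SGD stable even when
# the raw inputs include a large value like 100; without this the
# gradients blow up and the loss goes to NaN.
import numpy as np
import tensorflow as tf

x_raw = np.array([-1.0, 2.0, 4.0, 6.0, 7.0, 100.0], dtype=float)
y_raw = x_raw * 2 + 5

x_mean, x_std = x_raw.mean(), x_raw.std()
y_mean, y_std = y_raw.mean(), y_raw.std()
xs = ((x_raw - x_mean) / x_std).reshape(-1, 1)  # zero mean, unit variance
ys = ((y_raw - y_mean) / y_std).reshape(-1, 1)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500, verbose=0)

# Predict for x = 10: scale the input in, un-scale the answer out.
scaled_in = np.array([[(10.0 - x_mean) / x_std]])
pred = model.predict(scaled_in, verbose=0) * y_std + y_mean
print(pred)  # close to 2*10 + 5 = 25
```

The same mean/std used for training must be reused at prediction time, or the un-scaled answer will be wrong.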
I really appreciate you Sir
And I you! :)
eagerly waiting for part-2
It's already out. Part 3 next week.
Good explanation, thanks.
Lawrence, great job man! thank you so much
Thanks! :)
I really loved the videos, so I liked them all before watching because I'm sure I will watch them all :D inshallah :D Thanks for your effort!
Welcome!
I have a question on something I don't understand: Dr. Moroney said the prediction is not perfect because the computer is trained on 6 values that form a straight line, but outside those 6 the relationship may not be straight (although it is highly probable that it is). I don't get this point: since it is a NN with only one neuron, the prediction has to be a straight line (it should be like a linear regression). Am I correct? Or did I misinterpret something?
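Editor's note: the commenter's intuition is right; one Dense neuron is exactly y = w*x + b, i.e. a linear regression, so every prediction lies on a line. You can see it directly by printing the learned weight and bias after training (a sketch using the same data as the video):

```python
# After training, the single neuron's weight and bias sit very close to
# the true rule y = 2x - 1, but not exactly on it -- which is why the
# prediction for 10 comes out near 19 rather than exactly 19.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float).reshape(-1, 1)
model.fit(xs, ys, epochs=500, verbose=0)

w, b = model.layers[0].get_weights()
print(w, b)  # near [[2.]] and [-1.]
```

So the residual error isn't the line bending; it's gradient descent stopping very close to, but not exactly at, the true weight and bias.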
Super explanation. You are a great teacher.
Thanks Naduni!
clear explanation thank you so much!
Thanks!
🌟 🌟 🌟 🌟 🌟 Wow! What a very clear and straightforward explanation! Thank you!
Thank you, Thunderjaw!
I think this is a re-launch; I hope more videos are coming, hopefully in TensorFlow 2.
Not a relaunch. Just keeping up the rhythm of videos based on people's demands
very nice way of teaching
I'm trying! :)
Awesome presentation skills.
print(model.predict(np.array([[10.0]])))
Please make a series on audio data loading n analysis using tensorflow
I didn't understand what exactly the input shape is and why it is 1. Is it because it accepts our input array only one value at a time, or is there another reason? Also, I can't understand how and why a NN with just 1 neuron produces 18.99 instead of 19; doesn't 1 neuron mean it can only predict an exact value, making any deviation impossible?
Input shape is 1 because we just want to predict the result for a single input value (i.e. 10).
The neuron won't get the *exact* value because it deals in probabilities, not certainties; the prediction is a very high probability that the answer is 19, so when you evaluate it as a number you get something close to 19.
Excellent video! Thanks
My pleasure! :)
Perfect explanation
Thanks!
Hi Laurence,
I am trying to implement the same code with two inputs, x1 and x2. I'm having difficulty with: 1) how to specify the x values, i.e. what the matrix of the two inputs should look like, and 2) what input shape should be specified here. Could you please help with this?
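Editor's note: a hedged sketch of the two-input case. X becomes a (samples, 2) matrix where each row is one [x1, x2] pair, and input_shape becomes [2]. The relationship y = x1 + 2*x2 - 1 below is a made-up example, not anything from the video:

```python
# Two input features: each row of X is one example, so X has shape
# (num_samples, 2) and the Dense layer takes input_shape=[2].
import numpy as np
import tensorflow as tf

X = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [2.0, 3.0],
              [3.0, 2.0],
              [4.0, 4.0],
              [1.0, 2.0]], dtype=float)                 # shape (6, 2)
y = (X[:, 0] + 2 * X[:, 1] - 1).reshape(-1, 1)          # made-up linear rule

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[2])])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(X, y, epochs=2000, verbose=0)

pred = model.predict(np.array([[2.0, 2.0]]), verbose=0)  # one new [x1, x2] row
print(pred)  # close to 2 + 2*2 - 1 = 5
```

The single neuron now learns one weight per feature plus a bias, so this is just multivariate linear regression.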
Thank you so much. You saved my life.
Welcome! :)
Awesome video. Thank you!
Thanks Shaji!
Excellent. When does No.2 arrive?!
Weekly
wow, ever since AI ML got my attention i have been looking for something like this, thanks @lmoroney for bringing this to us.
Welcome! Glad you enjoyed! :)
Plz be regular and consistent. 😊
Looks like a job for normalization 😉
Trying!
@@laurencemoroney655 Thanks! Loved your videos, very knowledgeable.
thanks for the good explanation
Welcome!
Still waiting to see what TensorFlow can give out.
Great intro to ML. I really like this video. Thanks, Google and the Google team!
Thanks!
Sir, is it important to study the complete process of all machine learning algorithms, or is it enough to know the application of each algorithm? Please tell.