This is incredible. Anyone being able to reach this kind of information in just minutes is indescribable, priceless.
Thanks. Glad you enjoyed the content.
Originally, I thought you were extremely skilled in writing backwards, but then I realized that you can film it backwards, and then flip the video. Good effect.
One giveaway is that the video shows you using your left hand to write, but thumbnail images show you using your right hand.
That's great John. How great of you to point that out. Ya figured it out!
Well done catching that. You must have a huge neural network in your noggin :D
@@untitled746 I did notice once when I was young that if you put your left hand out the door of a glasshouse and write on the outside with your finger in the dampness, it's easy to write in a way that reads properly from inside the glasshouse.
Way easier than if you try and write on paper with your left hand.
Chirality?
Luv and Peace.
lol was wondering about that exact thing
Thanks IBM for this series of videos. It's been very useful.
I, as a neural network, look at this video and think, 'Yes, this is what we've always talked about!'
Why do you enjoy lying to people on the internet?
Thanks for this video. You held my attention for the full duration!
Thank you so much for this ... the regression example has really helped me understand how decisions are made in AI
My worlds have collided. Martin is now helping me on my AI journey AND providing me with interesting information on beer brewing experiments. Awesome!
This presenter, Martin Keen, embodies Einstein's quote "If you can't explain it simply, you don't understand it well enough." Keen certainly understands AI and is a master at explaining things simply.
I'm interested in applying an ANN to generate synthetic data to feed and calibrate an options pricing model that incorporates stochastic volatility, so thank you for this brief low-level introduction video to ANNs.
Hello, I am building a similar ANN but have run into some problems regarding dropout regularisation. I have also researched neural networks a bit for my project. Do you think we could get in touch and share some ideas?
Thank you. This is really great and explained the actual nuances very clearly.
Writing backwards like that showcases his talent.
Only 45 comments under an MIT video is a sin. I need to get addicted to this stuff.
I think I am going to use that formula for making decisions in my daily activities.
Sharks you said? Sharks are always a 5. But yeah otherwise good quick intro.
This is incredible.
Thanks that was truly helpful for new starters
Good explanation, and yeah I have subscribed, thanks
absolutely fun to learn from you, big thank you!
Very well done.
thank you
Thanks
incredible bro
Simple and precise. Thanks
A lot was hidden behind 'cost function' and 'gradient descent' which left me feeling like the kernel of understanding was incomplete.
How do neural networks help computers recognize patterns?
I just learned about artificial brains (pretty much) in five minutes. Wow.
I would have thought that the presence of sharks was probably a bigger consideration than the quality of the waves, but perhaps that's just me...
amazing
I normally watch this guy do homebrew videos and now my mind is blown
Hi I have a question regarding the threshold used in the equation to calculate the yhat value. What is a threshold and why did you choose 3 specifically? Is it related to the number of factors taken into consideration? Thanks
In the surfer example, how did you select "3" as the threshold value?
That's how many items are being measured. There are 3 x values with corresponding weights.
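Putting the numbers from this thread together, here's a minimal sketch of the surf decision as a perceptron. The threshold of 3 comes from the discussion above; the weights (5, 2, 4) and input names are assumptions for illustration, not necessarily the video's exact values.

```python
# Hypothetical sketch of the surfer perceptron from the video.
# Assumed weights: waves=5, empty lineup=2, shark-free=4; threshold=3.
def surf_decision(x, weights=(5, 2, 4), threshold=3):
    """Return 1 (go surf) if the weighted input sum exceeds the threshold."""
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total - threshold > 0 else 0

# Good waves (1), crowded lineup (0), no recent shark attacks (1):
print(surf_decision((1, 0, 1)))  # weighted sum is 9; 9 - 3 = 6 > 0, prints 1
```

Note the threshold just moves across the inequality as a bias of -3, which is why the video's equation subtracts it from the weighted sum.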
Thanks sir
This is great! But can you write bigger so we can read it too?
I believe if you can hook up to a monitor, it will appear bigger. The easiest I've found is to actually run an HDMI from my laptop to a TV, but with modern features, a phone or laptop can screencast to a smart tv.
Can you recommend any books about the topic "Artificial Neural Networks" for beginners ?
is the example of wind surfing based on a perceptron or on a more complex neural network?
Are those "neurons" simple chips that transfer and process the information?
Very good short video. But your voice occasionally drops to an inaudible level.
Is it possible to take individuals in to training and test samples instead of observations when training the ML models?
Interesting! But what for? Any examples?... M
In neural work, can weights be negative values?
Yes, it's possible, but it depends on how you design your architecture.
If, instead of all-positive values, you have a data type that can be represented by both positive and negative values, it may be useful to use negative values as well.
For example, if you have a conversational AI that represents happy words or notions with positive values and negative words or notions with negative values, it could prove useful to have some negatively weighted neurons, which may produce a negative-valued output.
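As a hypothetical illustration of the reply above (the weights, inputs, and bias here are made up), a negative weight lets a "bad" signal such as a shark sighting pull the activation down instead of encoding its absence as a positive input:

```python
# Sketch: a negative weight lets an input push the activation down.
# Assumed inputs: (good waves, shark sighted) with weights (5, -4) and bias -3.
def activation(x, weights, bias):
    """Weighted sum plus bias; negative output means 'don't go'."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

# Good waves (1) but sharks sighted (1): the -4 weight cancels the +5.
score = activation((1, 1), weights=(5, -4), bias=-3)
print(score)  # 5 - 4 - 3 = -2, so stay on the beach
```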
Lack of sharks is definitely more important than the waves in my (non-surfing) opinion.
So… if you trained your language prediction model on, say, academic libraries instead of Twitter you might get a more reliable tool?
Like a medical assistant trained purely on peer-reviewed medical libraries? Is anyone doing that?
Is IBM still a thing?
why is my beer guy on this side of the algorithm???
Most Aussie/New Zealand thing I think I have ever witnessed is weighing the quality of the waves heavier than whether or not there are sharks out there...
Bro make it six things. Activation functions are important and you haven't mentioned them.
Spiking neural networks do not have an activation function. A spiking neuron has an update function instead, which calculates its state at time t.
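For contrast with threshold/activation neurons, here is a toy leaky integrate-and-fire update of the kind spiking models use. The decay and threshold values are arbitrary assumptions for illustration, not from the comment:

```python
# Illustrative leaky integrate-and-fire update (a common spiking-neuron model):
# the state decays each step, integrates input current, and the neuron "spikes"
# (then resets) when the state crosses a threshold.
def lif_step(v, input_current, decay=0.9, threshold=1.0):
    """One time step: return (new_state, spiked?)."""
    v = decay * v + input_current
    if v >= threshold:
        return 0.0, True   # reset after a spike
    return v, False

v, spikes = 0.0, []
for current in [0.3, 0.3, 0.3, 0.3, 0.0]:
    v, spiked = lif_step(v, current)
    spikes.append(spiked)
print(spikes)  # the state builds up until it crosses 1.0 on the fourth step
```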
I don't understand what the threshold value refers to, for example the 3.
How is it connected to graphs?
Ooooooh my God it's the homebrew guy
I was far too distracted thinking about the cons of the speaker having to write backwards... Looks cool, but is mostly illegible. :)
His writing was poor (same as mine), and I assume a simple mirror/flip superimposition was used. Very effective, but I too was distracted by this simple effect.
It would take me years of hard training to be able to write backwards like the IBM dude.
See ibm.biz/write-backwards
But you didn't really explain what the nodes do. You explained the progression from input to hidden to output, and then you showed us how an algorithm works, but I didn't gain any understanding of the individual nodes, how they interact, and what they do. Did I just miss it?
Perhaps you should consider software engineering school
what is a longap?
Isn't the threshold 5?
Where can I learn more about this? Any web link or course?
Thanks for the question. Here are a couple links that may be of use to you. developer.ibm.com/articles/l-neural/ www.ibm.com/products/spss-neural-networks
3blue1brown has a very good series on neural networks. The neural network system they show is primitive and they have been improved over the decades, but it is a good primer to understanding the basic ideas.
Thank you !
How did you come up with -3 as the threshold?
I think the threshold is 5, because it's the maximum weight.
Randomly, I believe. You randomly select initial weight and bias values, and through training the model finds the optimal values, using the cost function to minimize the errors.
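A toy sketch of what that reply describes: the weights and bias start random, then gradient descent on a squared-error cost nudges them toward values that fit the data. Everything here (the AND dataset, single sigmoid neuron, learning rate, epoch count) is made up for illustration:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # random initial weights
b = random.uniform(-1, 1)                      # random initial bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND function
lr = 0.5
for _ in range(5000):
    for x, y in data:
        a = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        grad = (a - y) * a * (1 - a)  # d(cost)/dz for squared-error cost
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(preds)  # the trained neuron reproduces AND: [0, 0, 0, 1]
```

The random start only gives the optimizer somewhere to begin; it is the repeated downhill steps on the cost that produce the final weights.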
Isn't there already a symbol for "hat", namely "^"? Why write out the symbol's full name?
Because he’s teaching. Why do you care?
I wonder where you are writing?
See ibm.biz/write-backwards
Ok, hold on. So you're saying that if the neural network searches the entire internet and there have not been any shark attacks, then it would be safe to go swimming?
Perhaps
Like no 5K
how is he writing backwards????
It's not like the human brain.
My brain functions differently from what you are explaining now.
Concord effect i interrupt
Lebsack Corners
Wow he literally wrote y hat :p instead of making hat on top of y
Good, but some parts were very poorly explained and rushed. You can't just say "we leverage supervised learning on labelled datasets" without explaining and expect people to understand 🤣
I WANT IT UNDER 50 SECONDS AND NO BODY GOT TIME FO THIS
So it's basically trying to simulate the way the brain processes data
He did not explain anything!
Why are these videos mostly garbage? Because they don't really fully explain why it works.
A very poor explanation of Neural Networks
Another useless video. Copy and paste.
What markers are you using ?