Video starts at 1:21.
Thanks for the tip....🤔
@@ThomasHenson Thanks 🤔
@@ThomasHenson how about skipping all the BS next time instead of thanking him?
Seriously man get to the point!
Holy Cow, 6 minutes wasted. Lots of words, no real explanation. Training is the process of passing the training dataset forward & backward through the network structure, adjusting the weights to reduce the losses X number of times. When the losses have been reduced to an acceptable level, the model's structure and the weights & biases are saved.
Inference is the process of loading the previously saved model's structure and the weights & biases into memory and then running feature sets (X values) of new data through the model producing predictions (y values)...
Training/learning is creating "intelligence". Inference can be thought of as "thinking" with the artificially intelligent "brain" that was created through training.
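That whole cycle — train, save, load, predict — can be sketched in a few lines of plain Python. This is a toy one-weight model, not any real framework; `train`, `infer`, the dataset, and the learning rate are all made up for illustration:

```python
import json

# Toy "dataset": learn y = 2x from (x, y) pairs.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def train(data, epochs=200, lr=0.05):
    """Training: pass the data through the model repeatedly, adjusting the weight to reduce loss."""
    w = 0.0  # single weight, no bias, to keep it minimal
    for _ in range(epochs):
        for x, y in data:
            y_pred = w * x       # forward pass
            error = y_pred - y   # how wrong the prediction was
            w -= lr * error * x  # backward step: nudge the weight to reduce the error
    return w

# Training produces the parameters, which are then saved.
saved_model = json.dumps({"w": train(data)})

def infer(saved_model, x):
    """Inference: load the saved parameters and run new data through them."""
    w = json.loads(saved_model)["w"]
    return w * x

prediction = infer(saved_model, 5.0)  # new data the model never saw in training
```

Note the split: `train` is the expensive part you run once; `infer` only loads the stored weight and does a cheap forward pass on new data.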
I checked this comment at the 1 minute mark, thank you for saving me 5 minutes.
thank you for this comment :)
Doesn't inference require compute power too? Pretty sure models still need GPUs once they're being utilized by the end user.
What was the difference anyway? I'm more confused now!
DUDE... inference is the process of using an AI model to analyze new data and make predictions. You're just going round and round; get to the point.
Training is finding the parameters of the function, using large-scale data and computing resources. Inference is applying the function we have already determined, in a simpler environment and on a smaller platform.
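In other words: fit the parameters once with the full data, then apply the finished function anywhere. A minimal sketch of that split, using closed-form least squares for a line y = a*x + b (the helper names and data are invented for illustration):

```python
def fit_line(xs, ys):
    # "Training": find parameters a, b minimizing squared error (closed-form least squares).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def apply_line(a, b, x):
    # "Inference": apply the already-determined function to a new input.
    return a * x + b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated by y = 2x + 1
a, b = fit_line(xs, ys)
prediction = apply_line(a, b, 10.0)
```

Once `a` and `b` are known, `apply_line` needs none of the training data or fitting machinery — which is why inference can run on a much smaller platform.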
wipe this video please. Inference is something completely different! Please check your sources as this is 100% wrong information!!!
I agree
Good explanation, but you talk too much bro
+Axel vulsteke It’s a blessing or a curse not sure which 😀. Thanks for tuning in.
6 minutes on a complex subject is talking too much? You're an idiot.
You can easily skip to the 2nd minute.
But what is inference? Just using the AI?
How is inference affected by the location of the GPU servers?
Hi Thomas, I have a few questions, as I am a newbie in AI, machine learning, and deep learning. Do we still need special devices to run a trained model (inference)? I ask because I saw there are a few devices out there for that, like the Jetson Nano, Google Coral USB Accelerator, and so on. What is the advantage of using these devices rather than running directly on a PC or Raspberry Pi? My question may be silly, but I need an answer. Thanks
Power consumption may be a big factor with a PC: PCs are probably better suited for most individual-level training, while less power-hungry computers handle inference — unless your project is massive or server-based. A Raspberry Pi can't really run inference as fast as specialized SBCs, but accelerators could be added to make it more suitable.
Good video bro !
How do we calculate the inference speed metric for ML model ?
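One common approach — a rough sketch, not an official metric: time repeated forward passes after a warm-up, then report average latency (ms per prediction) and throughput (predictions per second). The `model` function here is a hypothetical stand-in for a real trained model's forward pass:

```python
import time

def model(x):
    # Stand-in workload for a trained model's forward pass (hypothetical).
    return sum(i * x for i in range(1000))

def measure_inference_speed(fn, sample, runs=200, warmup=20):
    # Warm up first so one-time setup costs don't skew the numbers.
    for _ in range(warmup):
        fn(sample)
    start = time.perf_counter()
    for _ in range(runs):
        fn(sample)
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / runs * 1000  # average time per prediction
    throughput = runs / elapsed         # predictions per second
    return latency_ms, throughput

latency_ms, throughput = measure_inference_speed(model, 3.0)
```

For GPU models you'd also need to synchronize the device before reading the clock, and batched inference is usually reported per-sample — but the latency/throughput pair above is the core of it.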
I mean, get into the topic first, or keep the introduction very, very short. 'Cause you only have 6 minutes, and the first 1.25 of them transmit no meaningful information.
Would have been better if you included some math, code and/or diagrams to explain what's really going on. Just general information, not much useful.
Seemed pretty fucking simple to me
Cats unfortunately don't have an inference engine to distinguish cats from mirror reflections.
I bought an Orange Pi IoT, which is an Arm Cortex-A5 SBC, and I'm investigating the Arm inference engine named ... let's see ... Linaro ua-cam.com/video/VYY6RbrzEr8/v-deo.html
And the distant goal is to get a lightweight biped robot to keep balanced while walking.
Those cats are always messing things up....😀 Thanks for tuning in!
Starts at 2:06
Do weights and biases change when we change the image?
To answer your question: if the y-predicted value matches the y value exactly, weights and biases are not changed. But normally you would not train on single images but rather on batches, and it's unlikely that the model during training will get every image of a single batch exactly correct. If it does, it's likely you're overfitting the data.
Weights and biases change based on the error produced when passing features forward through the structure. The loss function compares the output from the forward pass with the actual y value (error). Weights and biases are then adjusted through backprop to reduce the error slightly. The process is repeated for the next batch of data for X epochs.
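A bare-bones sketch of that update for a one-weight linear model (the helper name, data, and learning rate are invented; real training uses autograd over many parameters). The first batch is predicted exactly, so the gradients are zero and the parameters don't move; the second batch produces an error that nudges them:

```python
def sgd_step(w, b, batch, lr=0.1):
    # One forward/backward pass over a batch for a linear model y = w*x + b.
    n = len(batch)
    grad_w = grad_b = 0.0
    for x, y in batch:
        error = (w * x + b) - y   # forward pass vs. the actual y (the loss signal)
        grad_w += error * x / n   # gradient contribution for the weight
        grad_b += error / n       # gradient contribution for the bias
    # Adjust parameters to reduce the error slightly.
    return w - lr * grad_w, b - lr * grad_b

batch = [(1.0, 3.0), (2.0, 5.0)]  # generated by y = 2x + 1

# Perfect predictions: error is zero, so nothing changes.
w, b = sgd_step(2.0, 1.0, batch)

# Imperfect predictions: the error nudges the parameters toward the data.
w2, b2 = sgd_step(0.0, 0.0, batch)
```

Repeating `sgd_step` over successive batches for X epochs is exactly the loop described above: forward pass, compare with the actual y, backprop, small adjustment.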
Enjoyed the video. Thanks
Thanks for watching! 👍
Get to the point OMG
Very poor answer
If anyone here actually wants their question answered, this image from an NVIDIA blog explains inference really well. blogs.nvidia.com/wp-content/uploads/2016/08/ai_difference_between_deep_learning_training_inference.jpg
I don’t think you answered the question, thumbs down
The antics at the intro are purely unnecessary and cringy — don't do that! Just get straight to the fucking point and we'll get it. Most of your audience are obviously serious people; you know that, given the material you are presenting!
Nice video
Thanks for watching!
nice arms bro ... do u lift?
I sure try to 😉
He trains 😂
yeah this was useless
Absolutely Garbage 🗑️🗑️