Machine Learning vs Deep Learning

  • Published Dec 27, 2024

COMMENTS • 239

  • @patrickchan2503
    @patrickchan2503 3 months ago +39

    IBM teachers make a real effort to make learning fun, relatable, and digestible.

  • @pranavgpr5888
    @pranavgpr5888 2 years ago +430

    I'm still wondering how he wrote all of that mirrored from our perspective.

    • @koeniglicher
      @koeniglicher 2 years ago +273

      He wrote in his natural writing direction, and the video was flipped left to right during production before uploading.

    • @soumyas383
      @soumyas383 1 year ago +13

      I had a similar query. It's amazing btw.

    • @MegaBenschannel
      @MegaBenschannel 1 year ago +8

      I checked just to see if it was the first comment...

    • @rsstnnr76
      @rsstnnr76 1 year ago +9

      I'm pretty sure he just wrote on a tablet of some kind, recorded the screen he was writing on, keyed out the background in a video editor and overlaid and flipped during editing.

    • @albertkwan4261
      @albertkwan4261 1 year ago +16

      Lightboard is a glass chalkboard pumped full of light. It's for recording video lecture topics. You face toward your viewers, and your writing glows in front of you.

  • @Juanchicookie
    @Juanchicookie 2 years ago +41

    Thank you for such a valuable explanation. The practical example revealed the potential of these methodologies and your charisma made the video easy to follow. Cheers!

  • @netzash
    @netzash 1 year ago +21

    So next time I can't figure out what to have for dinner I just need to build a neural network?

  • @saadat_ic
    @saadat_ic 1 year ago +16

    Wow! I am impressed by how good you are at explaining such things. I was struggling with it. Thank you.

  • @stefanzander5956
    @stefanzander5956 2 years ago +37

    Actually, the example is IMHO not well suited for explaining ML and/or DL, as the aspect of "learning" (which is actually an optimization) is not really addressed by it. So it remains unclear a) what learning actually IS in terms of the example, and b) how the decision making can benefit from the learning aspect of the model.

  • @sdyeung
    @sdyeung 2 years ago +148

    Unsupervised learning is not limited to deep learning. The classic ML method k-means clustering can already discover similar patterns in the samples.
    I would say that the bright side of deep learning is feature extraction. In the old days, we did a lot of work to discover useful features: feature engineering. With deep learning, we now only need to supply the most basic features to the model: pixels for images, raw waveforms or spectrograms for speech. This saves my day.

    • @estring123
      @estring123 1 year ago

      So do you think the need for labelled data will decrease or increase?

    • @arkaprovobhattacharjee8691
      @arkaprovobhattacharjee8691 1 year ago +5

      @estring123 Labeled data will still be valuable for some tasks, especially for fine-tuning models, validating performance, and solving new and specific problems. On top of that, having labeled data is critical for certain applications where high accuracy and interpretability are required, for example medical diagnosis or safety-critical systems. Depending on the specific machine learning task and the type of data available, the balance between labeled and unlabeled data will vary.

    • @pedrorequio5515
      @pedrorequio5515 11 months ago +2

      @estring123 Yes, you will still need labeled data. The example given in the video is very bad and very wrong: deep learning models are a form of supervised learning, because, as in the video, you might ask what in an image of a pizza makes the algorithm know it's a pizza. The label "pizza" is an arbitrary name given by people, and you need the label to train the network.
      Backpropagation isn't just "going backwards" like the video suggests; it's the algorithm that actually makes these neural networks computationally feasible, since without it training would be far too slow.
      So why can these deep networks "learn"? At the root of it are convolutional neural networks: the convolutional layers take sections of the image and isolate features, where previously manual feature selection was crucial for success. Knowing the correct set of convolutional layers, on the other hand, is not easy, so it was the combination with genetic optimization algorithms that made them effective. However, the output layer will still need labels; unsupervised learning is only useful for finding useful features. A classification problem needs labels, which should be obvious, otherwise you can't classify.

    • @SATech-hub
      @SATech-hub 5 months ago

      @pedrorequio5515 With all these explanations and terms, did you also understand this as a beginner back then, heh?
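The k-means point made earlier in this thread is easy to demonstrate. Below is a tiny from-scratch 1-D sketch for illustration only (real code would use a library such as scikit-learn's KMeans): it groups unlabeled numbers into clusters with no deep network and no labels.

```python
# Tiny 1-D k-means sketch in plain Python (an illustration of classic,
# non-deep unsupervised learning; not a production implementation).
def kmeans(points, k, iters=20):
    centers = list(points[:k])  # deterministic init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        labels = [min(range(k), key=lambda j: abs(p - centers[j]))
                  for p in points]
        # Update step: each center moves to the mean of its members.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels, centers

# Six unlabeled measurements form two obvious groups; no labels needed.
data = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]
labels, centers = kmeans(data, k=2)
print(labels)  # [0, 0, 0, 1, 1, 1]
```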

  • @oghazal
    @oghazal 1 year ago +12

    How did you determine the threshold? How did you come up with -5? Please explain this concept. Thanks!

    • @AshokKumar-rh2bg
      @AshokKumar-rh2bg 8 months ago

      I also want to know that

    • @SpeaksYourWord
      @SpeaksYourWord 6 months ago +1

      did you find out

    • @leonwu0422
      @leonwu0422 6 months ago +5

      To determine the ideal threshold, you need to consider the problem you're trying to solve, the available data, and the desired behavior of the model. In this case, the person wants to make a decision about ordering pizza based on three factors: saving time, losing weight, and saving money.
      Here's a step-by-step process to help determine an appropriate threshold:
      1. Understand the problem: The main goal is to make a reasonable decision about ordering pizza based on the given factors.
      2. Analyze the factors and weights:
         - Saving time (X1) has the highest weight (5), so it's the most important factor.
         - Losing weight (X2) has a weight of 3, so it's moderately important.
         - Saving money (X3) has the lowest weight (2), so it's the least important factor.
      3. Consider the possible scenarios:
         - Best case: X1 = 1, X2 = 1, X3 = 1
         - Worst case: X1 = 0, X2 = 0, X3 = 0
         - Other scenarios: various combinations of X1, X2, and X3 values
      4. Evaluate the weighted sums for each scenario:
         - Best case: (1 * 5) + (1 * 3) + (1 * 2) = 10
         - Worst case: (0 * 5) + (0 * 3) + (0 * 2) = 0
         - Other scenarios: weighted sums will range between 0 and 10
      5. Choose a threshold that aligns with the desired behavior:
         - To order pizza only when all factors are favorable (best case), set the threshold to 10.
         - To order pizza more easily, even if some factors are not favorable, set the threshold to a lower value, like 5 or even 0.
         - To be more selective and only order pizza when the most important factors are favorable, set the threshold to a value between 5 and 10, like 7 or 8.
      6. Test and refine the threshold:
         - After setting the initial threshold, test the model with different input combinations and see if the decisions align with expectations.
         - If the model is making decisions that don't seem reasonable, adjust the threshold accordingly.
      In the given example, a threshold of 5 seems to be a good balance. It allows the model to decide to order pizza when the most important factor (saving time) is favorable, even if the other factors are not. However, it still requires a minimum level of overall favorability to make the decision.
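The arithmetic discussed in this thread can be sketched in a few lines of Python. The weights (5, 3, 2) and the threshold of 5 come from the video's pizza example; the function name and structure are just my illustration.

```python
# Perceptron-style decision from the video's example: a weighted sum of
# yes/no inputs, shifted by a threshold; "fire" (order pizza) if positive.
WEIGHTS = (5, 3, 2)  # saving time, losing weight, saving money

def order_pizza(x1, x2, x3, threshold=5):
    # Each input is 1 (yes) or 0 (no).
    weighted_sum = x1 * WEIGHTS[0] + x2 * WEIGHTS[1] + x3 * WEIGHTS[2]
    return weighted_sum - threshold > 0

print(order_pizza(1, 0, 1))  # 5 + 0 + 2 - 5 = 2 > 0 -> True
print(order_pizza(0, 1, 1))  # 0 + 3 + 2 - 5 = 0     -> False
```

Raising or lowering `threshold` makes the model more selective or more permissive, exactly as the step-by-step reply above describes.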

  • @dhess34
    @dhess34 2 years ago +68

    I love these videos. I just had a tech exec at a Fortune 200 company ask me for any podcasts that could help him stay abreast of current/emerging technology. I didn't have a great answer for him, but I did mention this series. He was looking for more audio-centric content though. Food for thought, @IBM Technology!

    • @IBMTechnology
      @IBMTechnology  2 years ago +16

      We're glad you like the videos! As for a podcast, it's definitely something we're interested in. Make sure you're subscribed; we'll be sure to announce it here if and when it happens.

  • @Anusiri-r9s
    @Anusiri-r9s 5 months ago

    Thanks for breaking it down in a way that's easy to understand! Your explanation was engaging and informative, and showed the power of AI and ML. You made a complex topic feel accessible.

  • @chris8534
    @chris8534 2 years ago +11

    I hate the idea of weighting variables, because if you change them you change the answer, which to me suggests there is no right or wrong answer. But if you get it right for your business or problem, it says to me that figuring out how to weight the variables is actually where the true problem and data is.

    • @jichaelmorgan3796
      @jichaelmorgan3796 1 year ago +2

      Introduces bias, which, depending on the scope, would include not just personal bias, but company bias, industry bias, and political bias. Weights and models have this issue.

  • @aanifandrabi5415
    @aanifandrabi5415 2 years ago +14

    I don't completely agree with the deep learning explanation, because labelling is required for weight training. Yes, pattern/feature extraction can be debated, but labelled data is required.

  • @Jeong5499
    @Jeong5499 2 years ago +1

    Your smile made me really enjoy the whole video! Thank you for the wonderful video : )

  • @Escrieg89
    @Escrieg89 1 year ago +1

    I like your style... you IBM people are smart....

  • @ABCEE1000
    @ABCEE1000 3 months ago

    You are the master of simplicity. Thank you so much!

  • @swoopskee
    @swoopskee 3 months ago

    Such an awesome explanation, I finally understand the differences between all these different technologies. Thank you so much!

  • @suparnaprasad8187
    @suparnaprasad8187 11 months ago +1

    Awesome videos! Love your teaching method!

  • @armanrangamiz3813
    @armanrangamiz3813 1 year ago +9

    It was a great explanation of ML and DL. The neural network was a key detail for understanding the difference between ML and DL and their fundamentals.

  • @ai-interview-questions
    @ai-interview-questions 11 months ago +1

    Thank you! It was a great explanation!

  • @crazetalks6854
    @crazetalks6854 11 months ago +1

    The way he explained it! Blew my mind.

  • @IgorOlikh
    @IgorOlikh 2 years ago +1

    I appreciate you for broadening my horizons on the subject.

  • @aaditya_s3301
    @aaditya_s3301 20 days ago

    great explanation😃

  • @jvarella01
    @jvarella01 1 year ago

    From 1-10 this is 20!! Thanks!

  • @davidgp2011
    @davidgp2011 2 years ago +3

    Fantastic distillation of the concepts.
    Are the presenters mirror images to make their writing appear the way it does or is it another tech trick?

  • @coffiberengerhoundefo1259
    @coffiberengerhoundefo1259 1 year ago +1

    Is a multi-layer neural network a deep learning model? If not, please give me an example of a deep learning model.

  • @syedasim6813
    @syedasim6813 1 year ago +2

    Thank you so much. You have explained it brilliantly ❤

  • @georgeiskander2458
    @georgeiskander2458 2 years ago +7

    I think there is confusion between feature extraction and unsupervised learning. I hope you can revise it.

  • @velo1337
    @velo1337 2 years ago +1

    Where are all the neurons, weights, and biases stored? In RAM, in a database? What data structure is used?
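To the storage question above (a general observation, not specific to the video): in practice, parameters live in RAM or GPU memory as dense arrays (tensors), one weight matrix and one bias vector per layer, and are only written to disk as checkpoint files. A plain-Python sketch of that data structure, with made-up numbers:

```python
# A layer's parameters are just a weight matrix and a bias vector.
# Here: a 3-input, 2-unit layer stored as nested lists; real frameworks
# use contiguous tensors in RAM/GPU memory for the same structure.
layer = {
    "weights": [[0.5, -0.2, 0.1],   # one row per output unit,
                [0.3, 0.8, -0.5]],  # one column per input
    "biases": [0.0, 0.1],
}

def forward(layer, x):
    # y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xj for w, xj in zip(row, x)) + b
            for row, b in zip(layer["weights"], layer["biases"])]

print(forward(layer, [1.0, 0.0, 1.0]))
```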

  • @Yann-v3j
    @Yann-v3j 9 months ago

    Very easy, well explained, thanks!

  • @JustinBerlowski
    @JustinBerlowski 1 month ago

    the way Aliagents structures their AI agents is groundbreaking, can’t wait to see what’s next

  • @leander9263
    @leander9263 10 months ago

    4:30 But if your interest in staying lean is 10000, the equation still comes to the same conclusion. Shouldn't X2 therefore be a choice between -1 and +1?

  • @negusuworku2375
    @negusuworku2375 1 year ago

    Hi there. Very helpful. Thank you.

  • @Ari-pq4db
    @Ari-pq4db 6 months ago

    This is awesome, thank you ♥

  • @hansbleuer3346
    @hansbleuer3346 1 year ago +2

    Superficial explanation.

  • @holger9414
    @holger9414 1 year ago +1

    Great video. I would like to understand more details about the layers. What are layers from a logical and technical perspective?

    • @LightDante
      @LightDante 1 year ago

      They are computing processes, I think.

  • @ahmedi.b.m8185
    @ahmedi.b.m8185 1 year ago

    Excellent video. Thank you

  • @davidzhang7318
    @davidzhang7318 3 months ago

    Pizza, burgers, tacos: a distinguishing factor between them is their breading. So a NN can learn from human supervision that a pizza image is labeled "pizza" because of its carbohydrate type, by visually being taught that this breading is pizza dough. Then you can train the NN to identify the ingredients of the dough by training it on features of labeled dough (dough such as white flour or almond flour), like their color and texture, or by unsupervised training to distinguish each food's breading by color or other external features, like how the texture looks (golden bubbles, or white flour on the breading). Then the NN can identify a pizza's carbohydrate type from being fed images of pizza dough that are manually labeled "pizza dough".
    In the example of pizza, burgers, and tacos, if a NN is fed a labeled data set of different yellows and golds and outputs the correct answer to the human input, then the NN's training on colors can be used to identify the dough type.
    My reflections.

  • @tzimisce1753
    @tzimisce1753 1 year ago

    TL;DR:
    If an NN has more than 3 layers, it's considered a DNN.
    DL finds patterns on its own without human supervision, and learns from them. It's a more specific type of ML.

  • @TheReal4L3X
    @TheReal4L3X 1 year ago +1

    bro managed to make an example about pizza... and i was eating it while watching this video 💀

  • @abdulrahmanelawady4501
    @abdulrahmanelawady4501 7 months ago

    In backpropagation, is the error synonymous with the weight?

  • @skywave12
    @skywave12 2 years ago

    I programmed an 8080 to Jump Non Zero at times. Full machine code to make side street and main street traffic lights. Worked the first time with no bugs.

  • @khaledsrrr
    @khaledsrrr 1 year ago +1

    Phenomenal easy explanation ❤

  • @JamalNasir-n7l
    @JamalNasir-n7l 3 months ago

    Can someone help me figure out what tools this guy has been using in this video? Like the transparent material he's been writing on.

  • @dinasadataledavood5719
    @dinasadataledavood5719 8 months ago

    Thank you for your useful video🙏🏻

  • @shankar_p
    @shankar_p 5 months ago

    Great job 👏

  • @MikeWiest
    @MikeWiest 1 year ago +3

    Thank you! Summary: deep learning is not so deep after all!

  • @bibintb
    @bibintb 1 year ago

    The presentation was amazing!

  • @NowayJose14
    @NowayJose14 1 year ago

    Bless YouTube's play speed feature.

  • @CBMM_
    @CBMM_ 1 year ago

    Great. I always thought NN and DL were two words for the same thing.

  • @lefebvre4852
    @lefebvre4852 1 year ago

    Great explanation

  • @nandagopal375
    @nandagopal375 2 years ago +1

    Thank you for valuable information 🙏🙏

  • @nadimetlavishwet1355
    @nadimetlavishwet1355 1 year ago

    You used a threshold of 5; what does the threshold actually mean in your pizza example?

  • @PedroAcacio1000
    @PedroAcacio1000 1 year ago +1

    I'm impressed by how well he can write backwards haha

  • @GNU_Linux_for_good
    @GNU_Linux_for_good 10 months ago +1

    00:20 No Sir - won't do that. Can't learn while digesting pizza.

  • @shravanNUNC
    @shravanNUNC 1 year ago

    Charismatic presentation...

  • @emrekuslu4418
    @emrekuslu4418 4 months ago

    I was hungry and ignorant before this video. Now I've learned about deep learning, but it made me deeply hungry.

  • @KL4NNNN
    @KL4NNNN 2 years ago +1

    I don't understand the input of zero. Whatever weight you give it, the product will always evaluate to 0, so whether you give it weight 1 or weight 5 the outcome is the same. What is the catch?

  • @NurserytoVarsity
    @NurserytoVarsity 1 year ago

    You're making education engaging and accessible for everyone. #NurserytoVarsity

  • @GregMartini-s1x
    @GregMartini-s1x 1 year ago

    That was very interesting and a great explanation of machine and deep learning.

  • @profangelinessurgery
    @profangelinessurgery 22 days ago

    How awesome, thanks a ton sir

  • @mtrapman
    @mtrapman 2 years ago +1

    I don't understand how you can suddenly use 1 (yes) and 0 (no) as numbers to calculate with?

    • @michaelschmidlehner
      @michaelschmidlehner 1 year ago

      Yes, any weight attributed to x2 will result in 0. Can someone please explain this?

  • @mhmchandanaprabashkumara7053
    @mhmchandanaprabashkumara7053 8 months ago

    Thanks for the information you've given me.

  • @TzOk
    @TzOk 7 months ago

    I've always thought that supervised learning is classification, and unsupervised is clustering. Thus DL is always supervised learning, because it still needs a labeled learning set. The differentiation between NN and DL is only in the feature extraction part: NN and "classic" ML require expert knowledge to shape input features, which are computed from the raw data and often normalized. In other words, DL doesn't require labeled features but still needs labeled data to learn from. Also, ML is not only NN but also rule induction algorithms (decision trees, Bayesian rules).

  • @pedrohsmarini1
    @pedrohsmarini1 1 year ago +1

    Wonderful! I loved the video, score: 1,000,000...

  • @Omar-fu4jj
    @Omar-fu4jj 1 year ago

    I didn't know that Gordon Ramsay gives lessons on machine learning and deep learning. For real though, the video was amazing and very helpful.

  • @fabri1314
    @fabri1314 10 months ago +1

    The humanities are fundamental in these processes! Now the funny example is pizza, but what about human rights? Who's feeding the bias to the algorithms???

  • @stevesuh44
    @stevesuh44 1 year ago

    Content is great. Audio is too low on these videos.

  • @syedhaiderkhawarzmi6269
    @syedhaiderkhawarzmi6269 2 years ago

    The moment he said pizza, I just paused and ordered one, and resumed when I got my pizza.

  • @ekramahmed9426
    @ekramahmed9426 7 months ago

    Thank you for your amazing and funny explanation

  • @sagarkafle9259
    @sagarkafle9259 2 years ago +1

    How is it possible for you to write 🙏😅
    while looking at us?
    Which way is the board?

    • @sagarkafle9259
      @sagarkafle9259 1 year ago

      Noticed he's been writing with his left hand😇

    • @michaelschmidlehner
      @michaelschmidlehner 1 year ago

      It is very simple, in most video editing programs, to flip a video horizontally.

  • @jel1951
    @jel1951 2 years ago +1

    You did well explaining mate, no idea what they’re talking about

  • @Nikos10
    @Nikos10 1 year ago

    Do you write in mirror writing?

  • @olvinlobo
    @olvinlobo 2 years ago +1

    Nice, loved it.

  • @annnaj7181
    @annnaj7181 1 year ago

    Why was the threshold 5?

  • @goulis14
    @goulis14 1 year ago

    Is there any connection between semi-supervised and reinforcement learning?

  • @krishnarajl4251
    @krishnarajl4251 2 months ago

    Good one

  • @minhtriettruong9217
    @minhtriettruong9217 1 year ago +1

    "It's time for lunch!" lol. I love this video. Thanks so much!

  • @waynelast1685
    @waynelast1685 1 year ago

    So is it possible to have unsupervised Machine Learning?

  • @SchoolofAI
    @SchoolofAI 2 years ago

    Steve Brunton style is becoming a genre...

  • @Lecalme23
    @Lecalme23 1 year ago

    Thank you

  • @mikkeljensen1603
    @mikkeljensen1603 2 years ago +4

    Plot twist, most people were eating while watching this video.

  • @poojithatummala1752
    @poojithatummala1752 10 months ago

    What does a threshold value of 5 mean, sir!?

    • @rafiksalmi2826
      @rafiksalmi2826 10 months ago +1

      If the sum is below this threshold, the decision is negative.

  • @SATech-hub
    @SATech-hub 5 months ago

    Hmm, so what impact did subtracting the threshold of 5 have? What does that even mean? Simple and great explanation!!!!

  • @Mohammed-ix5je
    @Mohammed-ix5je 1 year ago

    Thanks!

  • @Parcha24
    @Parcha24 2 years ago

    Very nice bhai 👌🏻

  • @PhilipaLubbers
    @PhilipaLubbers 5 months ago

    It's so interesting.

  • @NadiraRyskulova
    @NadiraRyskulova 1 year ago +7

    Dear Martin Keen, I really liked your video and find it extremely useful. However, I also wanted to discuss the activation function. The formula you used is (x1*w1) + (x2*w2) + (x3*w3) - threshold. As I understood it, the threshold is the biggest number used, and that's why you took the number 5? Also, our x2 equals 0, so even if w2 were 999999999 (as if being fit were super important to us), the answer for the whole equation would still be positive. So this is my concern about the formula: if w2 is more prevalent than the other options, why are we in every possible situation only capable of getting the answer YES, ORDER PIZZA? Even if x1 and x2 were 0 but x3 = 1, with w3 equal to 899796 or any other big number, we would still get a positive outcome. This really baffled me, so I would be happy to read your response!

    • @johnlukose3257
      @johnlukose3257 1 year ago +4

      Hello, I think this can be solved by replacing the number '0' with '-1'.
      By doing so, I guess it will give a fairer output based on our preferences.
      Good question btw 👍

    • @rahaam5421
      @rahaam5421 1 year ago

      My question as well.
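The concern in this thread is easy to verify numerically: with 0/1 inputs, a "no" contributes nothing regardless of its weight, whereas the -1 encoding suggested above lets a "no" actively pull the sum down. The weights and threshold below are the video's; the code itself is just an illustration.

```python
# Compare 0/1 inputs vs -1/+1 inputs for the pizza model.
weights = [5, 3, 2]  # saving time, losing weight, saving money
threshold = 5

def decide(inputs):
    s = sum(w * x for w, x in zip(weights, inputs))
    return s - threshold > 0

# With 0/1 encoding, "won't lose weight" (x2 = 0) is ignored no matter
# how large w2 is: the product w2 * 0 is always 0.
print(decide([1, 0, 1]))   # 5 + 0 + 2 - 5 = 2  -> True

# With -1/+1 encoding, a "no" actively votes against ordering.
print(decide([1, -1, 1]))  # 5 - 3 + 2 - 5 = -1 -> False
```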

  • @canadianZanchari
    @canadianZanchari 11 months ago

    I loved it❤️

  • @matthewpeterson431
    @matthewpeterson431 2 years ago +1

    Homebrew Challenge guy!

  • @HSharpknifeedge
    @HSharpknifeedge 1 year ago

    Thank you :)

  • @jimklerkx5092
    @jimklerkx5092 5 months ago +1

    Why the -5? 5 + 0 + 2 = 7, so why subtract 5? Please explain, I'm a junior!

  • @extraktAI
    @extraktAI 2 months ago

    Honestly, you had me at pizza

  • @ugoernest3790
    @ugoernest3790 2 years ago

    Beautifulllllllll ❤️❤️❤️😊

  • @fastrobreetus
    @fastrobreetus 3 months ago

    TY

  • @wokeclub1844
    @wokeclub1844 1 year ago

    Then what are PCA, regressions, etc.?!

  • @rafiksalmi2826
    @rafiksalmi2826 10 months ago

    Thanks a lot

  • @sujayr4502
    @sujayr4502 2 months ago

    Adam Gilchrist in the house

  • @lazzybug007
    @lazzybug007 1 year ago

    What a coincidence lol... I was eating a burger when I clicked on this video 😅😅

  • @abdelhaibouaicha3293
    @abdelhaibouaicha3293 10 months ago

    📝 Summary of Key Points:
    📌 Deep learning is a subset of machine learning, with neural networks forming the backbone of deep learning algorithms.
    🧐 Machine learning uses structured labeled data to make predictions, while deep learning can handle unstructured data without the need for human intervention in labeling.
    🚀 Deep neural networks consist of more than three layers, including input and output layers, and can automatically determine distinguishing features in data without human supervision.
    💡 Additional Insights and Observations:
    💬 Quotable Moments: "Neural networks are the foundation of both machine learning and deep learning, considered subfields of AI."
    📊 Data and Statistics: The threshold for decision-making in the example model was set at 5, with weighted inputs influencing the output.
    🌐 References and Sources: The video emphasizes the role of neural networks in both machine learning and deep learning, highlighting their importance in AI research.
    📣 Concluding Remarks:
    The video effectively explains the relationship between machine learning and deep learning, showcasing how neural networks play a crucial role in both fields. Understanding the distinctions in layer depth and human intervention provides valuable insights into the evolving landscape of AI technologies.
    Made with Talkbud

  • @KepaTairua
    @KepaTairua 2 years ago +5

    So I do like this series, but this confused me because he switched from one output - "should I buy pizza" - to another output - "is this a pizza or a taco". Is this a fundamental difference in what DL vs ML is able to do? Or is it that the first output doesn't require as many layers to become a neural network and therefore would always sit at a DL level? Sorry, I think I need to do more study and come back to this video.