What is an RBM (Restricted Boltzmann Machine)?

  • Published Dec 27, 2024

COMMENTS • 51

  • @martinfunkquist5342
    @martinfunkquist5342 1 year ago +30

    What is the difference between an RBM and a regular feed-forward network? They seem quite similar to me.

    • @kristoferkrus
      @kristoferkrus 1 year ago +20

      An RBM sends signals both "forwards" and "backwards" during inference and uses contrastive divergence to learn its weights, so it does not involve a loss function. A feedforward network only sends signals forward during inference and learns its weights with backpropagation and gradient descent, which requires a loss function. Besides, the RBM is energy-based (so it has an energy function, which takes the place of a loss function) and follows a simplified version of the Boltzmann distribution (one that doesn't include k and T), so it is stochastic, while a feedforward network isn't energy-based but is instead deterministic.

    • @samcoding
      @samcoding 8 months ago +3

      Someone correct me if I'm wrong, but to put what @kristoferkrus said in a simpler, higher-level view: feedforward networks just take an input, pass it to the hidden layer(s) and produce an output, in that order and direction.
      RBMs take an input and pass it to the hidden layer. The hidden layer then passes it back to the visible layer to generate the output, as in the sketch of that round trip below.
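
      A minimal Python sketch of that round trip, for illustration only (the weights W, the biases b and c, and the 6-visible/4-hidden layer sizes are all made-up assumptions):

          import numpy as np

          rng = np.random.default_rng(0)
          W = rng.normal(scale=0.1, size=(6, 4))        # hypothetical weights: 6 visible, 4 hidden units
          b, c = np.zeros(6), np.zeros(4)               # visible and hidden biases

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          v = rng.integers(0, 2, size=6).astype(float)  # binary input on the visible layer

          # Forward pass: hidden activations are probabilities, then sampled
          p_h = sigmoid(v @ W + c)                      # p(h_j = 1 | v)
          h = (rng.random(4) < p_h).astype(float)

          # Backward pass: the hidden sample is passed back to reconstruct the
          # visible layer; this reconstruction is the RBM's "output"
          p_v = sigmoid(h @ W.T + b)                    # p(v_i = 1 | h)
          v_reconstructed = (rng.random(6) < p_v).astype(float)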

  • @ahmedsowdagar9034
    @ahmedsowdagar9034 2 years ago +10

    I was constantly searching for examples of what the visible layer and hidden layer are. This video explained it to me. Thanks

  • @nemeziz_prime
    @nemeziz_prime 2 years ago +7

    It'd be great if IBM could make a dedicated deep learning playlist consisting of videos like this one

  • @INSIDERSTUDIO
    @INSIDERSTUDIO 1 year ago +3

    No one noticed, but the man is writing in reverse 😮 how hard did he train to do that 🔥🙌

    • @IBMTechnology
      @IBMTechnology  1 year ago

      See ibm.biz/write-backwards for the backstory

    • @Abhilashaisgood
      @Abhilashaisgood 6 months ago

      I think they mirrored the video. It's really cool: write something on glass, then open your selfie camera. From the back camera it looks reversed, and from the front camera it looks the same!

  • @erikslorenz
    @erikslorenz 2 years ago +8

    I am incredibly motivated to build a light board

  • @Tumbledweeb
    @Tumbledweeb 2 years ago +1

    I have indeed made a decision: the one that brought me to this video! I'm here looking for photos of any of these Boltzmann machines.

  • @quocanhnguyen7275
    @quocanhnguyen7275 2 years ago +18

    So bad, doesn't say anything. This is just any neural network

  • @vgreddysaragada
    @vgreddysaragada 1 year ago +2

    You made it simple. Elegant presentation. Great work. Thank you.

  • @tanishasethi7363
    @tanishasethi7363 2 years ago

    I love how he's smiling throughout the vid

  • @_________________404
    @_________________404 14 days ago

    Right. A fully connected multilayer perceptron network or something.

  • @siddharthagrawal8300
    @siddharthagrawal8300 2 years ago +3

    This just sounds like a neural network without any output?

  • @oualda12
    @oualda12 2 years ago +2

    Thanks for the video. I'm new to this domain, and I want to ask: if the RBM has only two layers (visible and hidden), how can we get the output of this RBM? Should we add an output layer to get the result, or what?
    Thank you again.

  • @freespam9236
    @freespam9236 2 years ago

    watching "AI Essentials" playlist - no recommendations engine in the play right now

  • @hitarthpanchal1479
    @hitarthpanchal1479 2 years ago +4

    How is it different from standard ANNs?

    • @MartinKeen
      @MartinKeen 2 years ago +3

      Thanks for watching, Hitarth. Basically, the thing that makes an RBM different from a standard artificial neural network is that the RBM has connections that go both forwards and backwards (the feed-forward pass and the feed-backward pass), which makes an RBM very adept at adjusting weights and biases based on observed data.

    • @andreaabeliano4482
      @andreaabeliano4482 2 years ago

      At a very high level, one main difference is that ANNs are typically classifiers: they need labels in order to train and learn the weights of the edges.

    • @apostolismoschopoulos1876
      @apostolismoschopoulos1876 2 years ago

      @@andreaabeliano4482 Using RBMs, we are not interested in the weights of the edges? Aren't the final weights the probabilities that someone who watches video A will then watch video B? Am I understanding this correctly?

    • @Kryptoniano-n6m
      @Kryptoniano-n6m 2 years ago

      @@apostolismoschopoulos1876 Similar to an ANN, in an RBM we're *TOTALLY* interested in adjusting the weights and biases. And yes, a trained net (weights, biases) will tell us the probabilities of the visible units after sampling.

    • @Kryptoniano-n6m
      @Kryptoniano-n6m 2 years ago +4

      As someone said, the main difference is that an ANN is supervised learning with targets to predict, while an RBM is an unsupervised learning method. Other differences:
      Objective: ANN -> learns a complex function, RBM -> learns a probability function
      What it does: ANN -> predicts an output, RBM -> estimates a probable group of variables (visible and latent)
      Training algorithm: ANN -> backpropagation, RBM -> contrastive divergence (sketched below)
      Basic principle: ANN -> decreases a cost function, RBM -> decreases an energy function (a probability function)
      Weights and biases: ANN -> deterministic activation of units, RBM -> stochastic activation of units
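
      A toy, hedged Python sketch of one CD-1 update (the layer sizes, learning rate and initialization are invented for the example):

          import numpy as np

          rng = np.random.default_rng(0)
          n_vis, n_hid, lr = 6, 4, 0.1
          W = rng.normal(scale=0.01, size=(n_vis, n_hid))    # weights
          b, c = np.zeros(n_vis), np.zeros(n_hid)            # visible / hidden biases

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          v0 = rng.integers(0, 2, size=n_vis).astype(float)  # one binary training vector

          # Positive phase: clamp the data, infer hidden probabilities and sample
          ph0 = sigmoid(v0 @ W + c)
          h0 = (rng.random(n_hid) < ph0).astype(float)

          # Negative phase: reconstruct the visibles, then re-infer the hiddens
          pv1 = sigmoid(h0 @ W.T + b)
          v1 = (rng.random(n_vis) < pv1).astype(float)
          ph1 = sigmoid(v1 @ W + c)

          # CD-1 update: lower the energy of the data, raise it for reconstructions
          W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
          b += lr * (v0 - v1)
          c += lr * (ph0 - ph1)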

  • @pavanpandya9080
    @pavanpandya9080 1 year ago

    Beautifully explained. Thank you for the video!

  • @alyashour5861
    @alyashour5861 1 year ago +1

    How is bro writing backwards perfectly the whole time?

  • @abdulsaboor2168
    @abdulsaboor2168 7 months ago

    How is it different from a simple ANN with backpropagation??

  • @bzqp2
    @bzqp2 2 years ago +1

    Wait. So are the weights summed up to activate the nodes in the hidden layer, or does the sum represent the probability of activating a node?

    • @Kryptoniano-n6m
      @Kryptoniano-n6m 2 years ago

      Both. The weights and biases are used to estimate the hidden units by sampling p(h|v), or to estimate the visible units by sampling p(v|h). So although we can speak about activation of units, it is not a deterministic process. On the other hand, the weights and biases, along with the hidden and visible units, are used to calculate the energy of the system, which can be read as a probability as well.
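
      To make "energy as probability" concrete, here is a hedged Python sketch (the sizes and weights are arbitrary); p(v, h) is proportional to exp(-E(v, h)), so lower-energy configurations are more probable:

          import numpy as np

          rng = np.random.default_rng(0)
          W = rng.normal(scale=0.1, size=(6, 4))        # hypothetical weights
          b, c = np.zeros(6), np.zeros(4)               # visible and hidden biases
          v = rng.integers(0, 2, size=6).astype(float)  # visible states
          h = rng.integers(0, 2, size=4).astype(float)  # hidden states

          # Energy of the joint configuration (v, h); p(v, h) ∝ exp(-E(v, h))
          def energy(v, h):
              return -(v @ b) - (h @ c) - (v @ W @ h)

          print(energy(v, h))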

  • @amishajain3400
    @amishajain3400 2 years ago

    Beautifully explained, thank you!

  • @vishnupv2008
    @vishnupv2008 2 years ago +1

    In which neural network are nodes in a given layer connected to other nodes in the same layer?

  • @svanvoor
    @svanvoor 1 year ago

    Either (a) this guy is very good at mirror writing, or (b) they mirrored the video after recording. Given he's writing with his left hand, and given the .9 probability of right-handedness as a Bayesian prior, I assume P(b)>P(a).

  • @smallstep9827
    @smallstep9827 2 years ago +1

    sample?

  • @mathewssaiji5149
    @mathewssaiji5149 2 years ago

    Waiting for your recommendation system and explainable recommendation system videos

  • @high_fly_bird
    @high_fly_bird 2 years ago

    Charismatic speaker! But I think the theme of hidden layers is not clear enough: hidden layers usually are not interpreted. Maybe you were talking about hidden layers? And it would be cool if you actually gave an example of WHAT exactly is passed to the visible layer. Numbers? Which numbers?

  • @akaashraj8796
    @akaashraj8796 2 months ago

    How is he writing backwards?

    • @IBMTechnology
      @IBMTechnology  1 month ago

      If you want to see how it's done, check out this behind-the-scenes video ua-cam.com/video/Uoz_osFtw68/v-deo.htmlfeature=shared

  • @PunmasterSTP
    @PunmasterSTP 6 months ago

    Restricted Boltzmann Machine? More like "Really cool network that's just the thing!" 👍

  • @danielebarnabo43
    @danielebarnabo43 2 years ago

    What the hell? Is this what you do when you're not brewing?

  • @zzador
    @zzador 6 months ago

    You don't really know what you're talking about. The weights are just weights and NOT probabilities. The summed input of a unit fed through the sigmoid function is the activation probability of a unit in an RBM, and definitely NOT the weights.
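
    To make that distinction concrete, a tiny hedged example (the numbers are invented): the weights are unbounded real values, while the sigmoid of the summed input is what lands in (0, 1) and serves as the activation probability.

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        w = np.array([2.5, -1.3, 0.7])  # weights: arbitrary reals, not probabilities
        v = np.array([1.0, 0.0, 1.0])   # binary states of the connected visible units
        bias = -0.5

        p_on = sigmoid(v @ w + bias)    # sigmoid(2.7) ≈ 0.937 -- this is the probability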

  • @Tom-qz8xw
    @Tom-qz8xw 8 months ago

    Terrible explanation. What's the point of making these videos if you don't show equations? You didn't mention KL divergence or anything technical. Practically useless

  • @flor.7797
    @flor.7797 1 year ago

    😂