Berkeley AI
  • 101
  • 401 828

Videos

Spring 2016 Section 11 (Neural Networks) Overview
2.5K views · 8 years ago
Spring 2016 Section 11 (Perceptrons + Neural Networks) Solutions
2.3K views · 8 years ago
Spring 2016 Exam 8 Solutions
2.7K views · 8 years ago
Spring 2016 Section 10 (Naive Bayes + Perceptrons) Solutions
2.7K views · 8 years ago
Spring 2016 Section 10 (Naive Bayes + Perceptrons) Overview
2.7K views · 8 years ago
Spring 2016 Section 9 (HMMs + Particle Filters) Solutions
4K views · 8 years ago
Spring 2016 Section 9 (HMMs + Particle Filters) Overview
6K views · 8 years ago
Spring 2016 Section 8 (Sampling + VPI) Solutions
1.9K views · 8 years ago
Spring 2016 Section 8 (Sampling + VPI) Overview
2.3K views · 8 years ago
Spring 2016 Exam Section 6 Solutions
2.5K views · 8 years ago
Spring 2016 Exam Section 5 Solution
2.6K views · 8 years ago
Spring 2016 Section 7 (Bayes Nets + Variable Elimination) Solutions
7K views · 8 years ago
Spring 2016 Section 7 (Bayes Nets + Variable Elimination) Overview
11K views · 8 years ago
Spring 2016 Section 6 (RL + Probability) Solutions
2.2K views · 8 years ago
Spring 2016 Section 6 (RL + Probability) Overview
2.3K views · 8 years ago
Spring 2016 Section 5 (MDPs + RL) Solutions
5K views · 8 years ago
Spring 2016 Exam Section 3 Solution
2.1K views · 8 years ago
Spring 2016 Section 5 (MDPs + RL) Overview
3.1K views · 8 years ago
Spring 2016 Section 4 (Games + MDPs) Solutions
2.6K views · 8 years ago
Spring 2016 Section 4 (Games + MDPs) Overview
2.4K views · 8 years ago
Lecture 9 MDPs II
1.1K views · 8 years ago
Lecture 8 MDPs I
11K views · 8 years ago
Spring 2016 Section 3 (CSPs + Games) Overview
2.1K views · 8 years ago
Spring 2016 Section 3 (CSPs + Games) Solutions
2.2K views · 8 years ago
Spring 2016 Section 2 (Graph Search + CSPs) Solutions
2.2K views · 8 years ago
Spring 2016 Section 2 (Graph Search + CSPs) Overview
2.2K views · 8 years ago
Lecture 5 CSPs II
13K views · 8 years ago
Lecture 4 CSPs I
20K views · 8 years ago
Spring 2016 Section 1 (Search) Solutions
2.2K views · 8 years ago

COMMENTS

  • @ranaarslan8040 · 8 months ago

    The volume is too low.

  • @anomalous5048 · 8 months ago

    thank you so much.

  • @shell925 · 11 months ago

    Thank you. Could you please share the homework link here, if possible?

  • @zaranto7023 · 1 year ago

    Thank you

  • @karanacharya18 · 1 year ago

    Fantastic video explanation! Crisp, clear and formula-based. Easy to follow once you know the concepts and this video helps us clear the confusion among these fancy terms like joint, conditional and independence.

  • @vagabond7199 · 1 year ago

    The audio is not clear. Very bad audio.

  • @vagabond7199 · 1 year ago

    26:43 Isn't Smoke conditionally independent of Alarm given Fire?

    • @RajarshiBose · 9 months ago

      A traditional fire alarm detects smoke, not fire, so other sources of smoke (like someone smoking) can increase the chance of the alarm even though no fire has broken out.

  • @vagabond7199 · 1 year ago

    20:43 His explanation is quite confusing.

  • @vagabond7199 · 1 year ago

    The audio is not so clear.

  • @Melianareginali · 2 years ago

    Haha

  • @boccaccioe · 2 years ago

    Good explanation of likelihood weighting, very helpful

  • @aliamorsi6148 · 2 years ago

    The content here flows extremely well. Thank you for making it public.

  • @ulissemini5492 · 3 years ago

    Start at 9:22 if you know probability; if you don't, this is a terrible introduction and I'd suggest watching the 3b1b videos on Bayes' rule. A good textbook is Introduction to Probability by Blitzstein and Hwang.

  • @fratdenizmuftuoglu4755 · 3 years ago

    It is just an application of a bunch of expressions without context or a delivery of the logic. In my opinion, it does not teach anything; it just gives things to memorize.

  • @mmshilleh · 3 years ago

    Is there no need to normalize?

  • @channelforstream6196 · 4 years ago

    Best Explanation

  • @songsbyharsha · 4 years ago

    Perfect!

  • @heyitsme5408 · 4 years ago

    👍

  • @mdazizulislam9653 · 4 years ago

    Thanks for your very clear explanation. For more examples on d-separation see this ua-cam.com/video/yDs_q6jKHb0/v-deo.html

  • @typebin · 4 years ago

    The sound volume is too low.

  • @ruydiaz7196 · 5 years ago

    Is this really MLE? Or is it MAP? 'XD

  • @ruydiaz7196 · 5 years ago

    Perfect!

  • @mavericktutorial4005 · 5 years ago

    Really appreciate it.

  • @shreyarora771 · 5 years ago

    Shouldn't the score of alpha A1 at 11:00 be decreased and alpha B1 increased, since B is the right class?

  • @searcher94fly · 6 years ago

    Hi, at 4:17 didn't you do a switcheroo of the formula? Instead of P(x,y) = P(x)P(y|x), shouldn't it have been P(x,y) = P(y)P(x|y)? From what I hear in the video, that is the way you explained it.

    • @tubesteaknyouri · 4 years ago

      P(y|x)P(x) = P(x|y)P(y) because both are equal to P(x,y). See below:
      P(x|y) = P(x,y)/P(y), so P(x|y)P(y) = P(x,y).
      P(y|x) = P(x,y)/P(x), so P(y|x)P(x) = P(x,y).
      Therefore P(y|x)P(x) = P(x|y)P(y).

    • @Neonb88 · 3 years ago

      @@tubesteaknyouri And he did that so you get Bayes' Rule out of it. It wasn't just for the heck of it
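
      The identity discussed in this thread, written out compactly (this is just the product rule applied both ways, which is what yields Bayes' rule; nothing here is specific to the lecture):

      \[
        P(x,y) = P(x)\,P(y \mid x) = P(y)\,P(x \mid y)
        \quad\Longrightarrow\quad
        P(y \mid x) = \frac{P(x \mid y)\,P(y)}{P(x)}
      \]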

  • @nuevecuervos · 6 years ago

    The content here was extremely helpful, but the audio was really poor. Still, I wouldn't have figured this out without this particular video, so thank you!

  • @kudakwashemushaike7692 · 6 years ago

    *For the first question: 2(-1) + (-2)(2) = -6, not -2.

  • @samcarpentier · 6 years ago

    By far the most efficient source of information about this topic I could find anywhere on the internet

    • @oguzguneren4874 · 8 months ago

      After 5 years, it's still the only one on the whole internet.

  • @ryanschachte1907 · 6 years ago

    This was great!

  • @Mokodokococo · 6 years ago

    Hey, sorry, but I don't get why we sample when we already have the true distribution... I don't see how it can be useful... Does anyone have an explanation please :)

  • @michaelhsiu115 · 6 years ago

    Great explanation!!! Thank you!

  • @qwosters · 6 years ago

    Dude, I love you all for posting these lectures, but this is a 75-minute one on how to multiply two numbers together. Soooo painful :) <3

  • @qbert65536 · 7 years ago

    Really got a lot out of this thank you!

  • @terng_gio · 7 years ago

    How do you calculate the update weight? Could you provide an example to calculate it?

  • @dissdad8744 · 7 years ago

    Unfortunately the explanation of calculating entropy and information gain is very unintuitive.

  • @hansen1101 · 8 years ago

    Concerning ex. 2f: isn't the largest factor generated 2^4? The join on all factors containing T generates a table over 4 variables (say f2'), of which one is summed out to get f2, so f2' has size 2^4.

    • @user-ze4qq8mm1q · 5 years ago

      This is a good thought, but the given observation value +z is a constant, not a variable, so although it appears in f2(U, V, W, +z), the only variables of f2 are U, V, W, hence 2^3 = 8.
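
      A small sketch of that counting argument (the variable names U, V, W, Z follow the exercise referenced above; the code is only illustrative): once the evidence Z = +z is clamped, the joined factor is enumerated only over the free binary variables, giving 2^3 = 8 rows.

          from itertools import product

          free_vars = ["U", "V", "W"]   # Z is fixed to the observed value +z, so it is not enumerated
          rows = list(product([True, False], repeat=len(free_vars)))
          print(len(rows))              # 8 rows in the table for f2(U, V, W, +z)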

  • @zaman866 · 8 years ago

    Thanks for the video. I am just wondering how we normalize to sum to 1 in part g. Can you give a numerical example? Thanks.

    • @hansen1101 · 8 years ago

      +Zs Sj Assume f5 gives you a vector with 2 entries, for +y and -y, say [1/5, 3/5]. To normalize this vector, simply divide each coordinate by the sum of all coordinates: [1/5 * 5/4, 3/5 * 5/4] = [1/4, 3/4].

    • @zaman866 · 8 years ago

      Thanks

    • @zaman866 · 8 years ago

      hansen1101 do you know why we should normalize this, and how it became non-normalized in the first place?

    • @hansen1101 · 8 years ago

      +Zs Sj In this particular case you are calculating a distribution of the form P(Q|e), where e is an instantiation of some evidence variables. By definition this form has to sum to 1 over all instances of the query variable Q (i.e. P(q1|e) + P(q2|e) = 1 in the binary case). Be careful: there are queries of other forms that need not sum to 1, for which normalization is not necessary (e.g. P(Q,e) or P(e|Q)). This became non-normalized after applying Bayes' rule and only working with the term in the numerator, leaving out the joint probability over the instantiated evidence variables in the denominator. Therefore you have to rescale at the end.
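
      A minimal numerical sketch of that normalization step, reusing the [1/5, 3/5] example from the earlier reply:

          f5 = [1/5, 3/5]                      # unnormalized entries for +y and -y
          total = sum(f5)                      # 4/5
          posterior = [v / total for v in f5]
          print(posterior)                     # [0.25, 0.75], i.e. P(+y|e) = 1/4, P(-y|e) = 3/4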

  • @vedhasp · 8 years ago

    Can anybody please explain the results on the slide at 1:05:11 for the given probability tables?

    • @vedhasp · 8 years ago

      +sahdeV OK, I got it... The observation we have is +u, not -u. So there are 4 ways in which +u is possible:
      Rain, Rain, Umbrella (TT-U); Sun, Rain, Umbrella (FT-U); Sun, Sun, Umbrella (FF-U); Rain, Sun, Umbrella (TF-U).
      The probability of each is, respectively: 0.5*0.7*0.9, 0.5*0.3*0.9, 0.5*0.7*0.2, 0.5*0.3*0.2.
      The T-U probability is therefore (63+27)/(63+27+14+6) = 0.818, and the F-U probability is (14+6)/(63+27+14+6) = 0.182.
      For the next stage, the time-based update alone gives B'(T) = 0.818*0.7 + 0.182*0.3 = 0.6272 and B'(F) = 0.818*0.3 + 0.182*0.7 = 0.3728. The observation-based (+u) update then gives B(T) = 0.6272*0.9 / (0.6272*0.9 + 0.3728*0.2) = 0.883 and B(F) = 0.3728*0.2 / (0.6272*0.9 + 0.3728*0.2) = 0.117.

    • @ilyaskarimov175 · 5 years ago

      @@vedhasp Thank you very much.
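
      The numbers in this thread correspond to the umbrella HMM parameters read off the comment above (stay probability 0.7, P(umbrella|rain) = 0.9, P(umbrella|sun) = 0.2, uniform prior); a short sketch reproducing the two filtering updates:

          def normalize(d):
              total = sum(d.values())
              return {k: v / total for k, v in d.items()}

          prior = {"rain": 0.5, "sun": 0.5}
          transition = {"rain": {"rain": 0.7, "sun": 0.3},
                        "sun":  {"rain": 0.3, "sun": 0.7}}
          p_umbrella = {"rain": 0.9, "sun": 0.2}

          # time (elapse) update: B'(x_t) = sum over x_{t-1} of P(x_t | x_{t-1}) * B(x_{t-1})
          def predict(belief):
              return {x: sum(transition[xp][x] * belief[xp] for xp in belief) for x in belief}

          # observation update for evidence "umbrella seen", followed by renormalization
          def observe(belief):
              return normalize({x: p_umbrella[x] * belief[x] for x in belief})

          b1 = observe(predict(prior))   # ~ {'rain': 0.818, 'sun': 0.182}
          b2 = observe(predict(b1))      # ~ {'rain': 0.883, 'sun': 0.117}
          print(b1, b2)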

  • @WahranRai · 8 years ago

    Audio not goooooooood

  • @WahranRai · 9 years ago

    1:17:14 It is a bad example for LCV!!! This case never happens, because the MRV heuristic will color SA blue (only one color left!!!).

  • @Scientity · 9 years ago

    This is very helpful! Thank you