Monte Carlo Methods : Data Science Basics

  • Published 14 May 2024
  • Solving complex problems using simulations
    0:00 Easy Example
    4:50 Harder Example
    13:32 Pros and Cons of MC

COMMENTS • 164

  • @aaryadeshpande1621
    @aaryadeshpande1621 2 місяці тому +7

    Unbelievable, every second of this 19-minute, 13-second video was characterized by a very clear sequence of sentences that meshed together, without any mistake, to form a carefully crafted, well-understood explanation of all the ideas together. For instance, what makes the difference between most who explain and those like you is found here at 15:38: "Because when I'm in this loop, this is going to run for a long time when p is large, because it's barely ever gonna see two losses in a row." I was following along, but it didn't completely click in a concrete sense until you grounded the logic by saying "because it's barely ever gonna see two losses in a row." You grounded all explanations with some set of "extra words" that made everything you say click with minimal rewinding and mental pounding, and I find that two types of teachers/explainers/communicators exist in this world: those who consistently don't employ those extra words and those who consistently do. I might nickname that feature "connecting back to concrete-land" or "those extra few click words," whose presence or absence in speech makes the difference between an explanation clicking for an audience and it not. And the absence of such extra words, multiplied over the many chunks of speech across hundreds of someone's utterances in a video or lecture, can culminate in a superficial understanding where you are left in abstract land with an itchy, uneven feeling about why the circle around the explanation never quite closed into full concreteness or a fulfilling understanding. Those extra words, the attention to detail, and ensuring your learners are really following along all the way to the end and closure of the circle of the explanation you intend to communicate really make the difference in reaching a wider audience and making critical concepts well understood. Thank you for the video!!

  • @stephenpuryear
    @stephenpuryear 2 роки тому +17

    Another cool thing about your approach is that your "toy example" follows closely from Bayes's original paper, which discussed tossing balls onto a flat surface and then locating an unknown point more and more closely. Thanks again for these great videos!

  • @alihaghverdi9893
    @alihaghverdi9893 2 роки тому +6

    Thank you for explaining the method as simply as possible. AWESOME!

  • @stephenpuryear
    @stephenpuryear 2 роки тому +22

    Superb! What is so remarkable in your presentation is the balance that you bring to the challenge. Each approach has strengths and weaknesses and it is essential to keep those in mind at the start. We are currently weathering a huge storm in which someone advocates for "their" method and says explicitly (or strongly hints) that every other approach is garbage or "not science". That makes your channel a big breath of fresh air! Thank you!

  • @sophiasha9680
    @sophiasha9680 2 роки тому +2

    love all the videos I've watched so far! very easy for beginners to follow and learn!

  • @anthnyalxndr
    @anthnyalxndr 2 роки тому

    Thanks for taking the time to put this together 🤝

  • @minhaoling3056
    @minhaoling3056 2 роки тому +5

    Your videos are so clear and simple, and I believe you will be the top YouTuber for data science concepts in the near future. One good suggestion would be to provide a snapshot of the things you write on the board.

  • @pectenmaximus231
    @pectenmaximus231 10 місяців тому

    Wonderful presentation. Ticks all the boxes. Look forward to watching your other videos.

  • @meg7617
    @meg7617 2 роки тому

    Brilliant effort. Your channel has very interesting and rich content. Thank you!

  • @eruditecognitions
    @eruditecognitions 2 роки тому

    You're a godsend! Thank you for explaining this so well!

  • @spicytuna08
    @spicytuna08 Рік тому

    Again, thanks for the splendid, easy-to-understand explanation.
    Coming up with a recursive formula seems very challenging.

  • @johnlourdusamy8002
    @johnlourdusamy8002 3 роки тому +17

    I am not an expert, but I enjoyed the dart and circle/square explanation. You should start something like Khan Academy.

  • @zombieboobuu9233
    @zombieboobuu9233 2 роки тому +1

    Another helpful video! Thank you for your hard work!

  • @-0164-
    @-0164- Рік тому

    Thank you so much! :) You offer nice, easy-to-follow explanations and different perspectives on important data science concepts.

  • @Alex-dx2lp
    @Alex-dx2lp 9 місяців тому

    Very approachable and informative video! Thank you for this!

  • @Reach41
    @Reach41 3 роки тому +2

    I've wondered about the Monte Carlo method for a long time, but never needed to figure out what it was. Now I know what it is, and see some uses in my robotics hobby. Thanks!

  • @nemuchan200
    @nemuchan200 2 роки тому

    Very neat explanation! Thank you!

  • @TC-yt2ug
    @TC-yt2ug 3 роки тому +3

    Very nice intuition of the concept, thank you! Would you mind sharing what your background is a bit? I'm studying artificial intelligence at uni, and we often get very hands-on courses which lack some deeper understanding of why we're doing things, and how things really work. Would you have any suggestion on what to do in order to build a base knowledge that would allow for a deeper understanding of things? I know it's not an easy question, but surely any insight will be useful!

  • @eliflale8680
    @eliflale8680 Рік тому

    I have understood it perfectly. Thanks for the video.

  • @user-hc8qv4ec2c
    @user-hc8qv4ec2c 2 місяці тому

    Excellent explanation, thank you so much!

  • @alibaba888
    @alibaba888 2 роки тому

    Wonderful video, just what I needed.
    - Has exampleS (1

  • @user-iw9zq5pi5y
    @user-iw9zq5pi5y 3 місяці тому

    Superb! Thank you for the great video!!!

  • @nikhilshingadiya7798
    @nikhilshingadiya7798 2 роки тому

    I am really happy with you and Eric Grimson, who taught me lots of maths.

  • @JonathanWaltersDrDub
    @JonathanWaltersDrDub 2 роки тому

    Your second example immediately reminded me of the negative binomial distribution. =)

  • @nicololucchesi9028
    @nicololucchesi9028 3 роки тому +1

    Very cool analytical solution to the dazzling problem you proposed, congrats on the video!

  • @NinjaAdorable
    @NinjaAdorable 3 роки тому +30

    I use MC for some fabrication process variation simulations in my research, and I must say this is one of the better explanations of the simulation. Good job.
    Couple of things:
    1) The rules emerge from understanding the problem. In my case, the behavior of the material and the possible probability distribution of faults (from case studies of real fabrication imperfections). You use MC when you know what the real-world solution would look like but you need a more approachable, repeatable, or approximate route to the solution (e.g., I cannot just fab the chip every time I want to see the impact of fabrication imperfections).
    2) Not everything has an analytical solution; or, more correctly, the solution is very complex, having to consider various factors to reach the "true" solution. It's in scenarios like this that the application of MC truly shines.

    • @ritvikmath
      @ritvikmath  3 роки тому +11

      I really appreciate this comment; shows there is always more to the story. I appreciate your perspective of using MC in your actual work. Thank you!

    • @ahmedibrahim-of8th
      @ahmedibrahim-of8th Рік тому +2

      Hey, may I get in contact with you? I have some questions about Monte Carlo in the IC fabrication process, if you can help me.

    • @kankanasaikia9156
      @kankanasaikia9156 10 місяців тому

      I need to do MC for an uncertainty analysis, but we have never done one, and we don't have the software either. Can you please guide me regarding this?

  • @marijatosic217
    @marijatosic217 2 роки тому

    Amazing! Great job!

  • @learn5081
    @learn5081 3 роки тому

    great instructor! so clear

  • @putinimpotent2044
    @putinimpotent2044 Рік тому

    Top notch video. 👍

  • @NickKravitz
    @NickKravitz 3 роки тому +6

    Great video as always! Backgammon and poker best-play situations cannot be solved in closed form and therefore are mostly solved by MC (called a rollout and a solver, respectively). Now that you've taught us MC, you have to teach MCMC (Markov Chain Monte Carlo) next!

    • @ritvikmath
      @ritvikmath  3 роки тому +3

      MCMC vids scheduled to release in the coming weeks :)

  • @aryangandhi894
    @aryangandhi894 Рік тому

    Great video, helps a lot!

  • @banjotoothlessbill
    @banjotoothlessbill 2 роки тому

    Very interesting, thank you !

  • @vishal13230
    @vishal13230 3 роки тому +2

    You always make tough concepts super easy. Thank you ritvik

  • @prafful1723
    @prafful1723 3 роки тому +1

    Thanks a lot, explained in a lucid manner.

  • @zephyrsurfteam
    @zephyrsurfteam 3 роки тому +6

    Thanks. I enjoyed it! I would love to see a coding video on Monte Carlo simulations.

  • @JosephRivera517
    @JosephRivera517 3 роки тому +3

    This is so interesting. Thanks for this great lecture.

  • @khalidtaha3373
    @khalidtaha3373 3 роки тому

    Thanks a lot for this explanation

  • @juansb1509
    @juansb1509 3 роки тому +2

    Great video. In the first example (integration problems or similar), I've always wondered why we sample randomly instead of using a grid.

    • @ritvikmath
      @ritvikmath  3 роки тому +1

      Good question; a grid would definitely work too but it would have to be pretty fine grained. Otherwise, you might get close to the true answer but not exactly, even with millions of samples. And, as a grid gets more and more fine grained, you approach the random sampling shown in this video.

    • @juansb1509
      @juansb1509 3 роки тому

      @@ritvikmath Thanks!

    • @user-sl6gn1ss8p
      @user-sl6gn1ss8p 3 роки тому +3

      This is an old question, but anyway: as far as I understand it, making a grid has a steeper dependency on the number of dimensions in your problem. For a very regular 2D area like the circle example it's probably just about as good, but for weird n-dimensional shapes building a grid becomes too expensive, while MC may still hold up. Also, there are concepts like importance sampling that give more of an edge to MC methods.
      The circle-in-a-square example is good for getting an intuition of the basic idea of the method, but you shouldn't judge the method based on it, because it's not really a practical application.
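
      A quick numeric illustration of the grid-versus-random point above (a minimal sketch, not code from the video): both estimators count the fraction of points of the square [-1, 1] x [-1, 1] that land inside the unit circle and multiply by 4.

      import numpy as np

      def mc_pi(n_points, seed=0):
          # Monte Carlo: n_points uniform random points in the square
          rng = np.random.default_rng(seed)
          x = rng.uniform(-1, 1, n_points)
          y = rng.uniform(-1, 1, n_points)
          return 4 * np.mean(x**2 + y**2 < 1)

      def grid_pi(m):
          # grid: an m-by-m lattice of evenly spaced points in the same square
          g = np.linspace(-1, 1, m)
          x, y = np.meshgrid(g, g)
          return 4 * np.mean(x**2 + y**2 < 1)

      # both use 10,000 points; in 2D they are comparable, but an m-per-axis grid
      # needs m**d points in d dimensions, which is the scaling issue mentioned above
      print(mc_pi(10_000), grid_pi(100))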

  • @Mihairtr
    @Mihairtr 10 місяців тому

    I love it, ty, man!

  • @amanbagrecha
    @amanbagrecha 3 роки тому +1

    The ratio of likes to dislikes tells you the quality of your videos. Amazing ~

  • @karimelmokhtari9559
    @karimelmokhtari9559 3 роки тому +3

    Awesome, I loved this video ❤❤

  • @moimonalisa5129
    @moimonalisa5129 2 роки тому +1

    I'm weak at probability, but the example is realistic and clearly explains both the deterministic way and the numeric way. I've tried it for other values of p and it works. Thank you for making this video.

  • @rhodium8505
    @rhodium8505 Рік тому

    amazing. thank you

  • @TomJones-yp1qh
    @TomJones-yp1qh Рік тому

    Great, easy to understand video! What environment would this code be written in? R? Python?

  • @ANJA-mj1to
    @ANJA-mj1to 8 місяців тому

    THANKS!
    Despite the many undesirable properties of the aforementioned methods, the Monte Carlo method is still the most general and reliable stochastic method.

  • @stevetrabajo4065
    @stevetrabajo4065 3 роки тому +1

    Thanks Ritvik. I'm a fan of your channel now.

  • @GodfroySir
    @GodfroySir 3 роки тому +1

    Thank you for the video. Don't forget that you'd also need a good random number generator.

  • @haluk22
    @haluk22 3 роки тому +3

    If you also iterate the p values from 0 to 1, you may get the interpretation, or at least a graph similar to the closed form's. Agreed that it takes some more time 😊

    • @ritvikmath
      @ritvikmath  3 роки тому +1

      for sure, you might get a strong approximation to the true function!
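
      A rough sketch of that sweep over p (my own reconstruction, assuming rand() < p counts as a win, as discussed elsewhere in the comments, and that the closed form is e(p) = (2 - p)/(1 - p)^2, which is what the recursion worked through further down gives):

      import numpy as np

      rng = np.random.default_rng(0)

      def simulated_expected_rounds(p, n_games=5_000):
          # average number of rounds per game, where a game ends at two losses in a row
          total = 0
          for _ in range(n_games):
              nloss, rounds = 0, 0
              while nloss != 2:
                  rounds += 1
                  if rng.random() < p:   # win: the loss streak resets
                      nloss = 0
                  else:                  # loss: the streak grows
                      nloss += 1
              total += rounds
          return total / n_games

      for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
          closed_form = (2 - p) / (1 - p) ** 2
          print(f"p={p:.1f}  simulated={simulated_expected_rounds(p):.2f}  closed form={closed_form:.2f}")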

  • @gautambadri9631
    @gautambadri9631 Рік тому

    Hello sir,
    Your video is really enlightening. Can you make a video on Quasi-Monte Carlo?

  • @jondo7680
    @jondo7680 Рік тому

    I paused the video and coded it (on my phone), with p being 50/50. I got 13, which I didn't expect. Then you said 6 and I had a feeling: 6 is nearly half of it. So I checked the sign and yes, I used

  • @jiayiwu4101
    @jiayiwu4101 3 роки тому

    Thanks for your great video! For the pi calculation one, may I ask what the difference is between MCMC and rejection sampling? I think MCMC also throws away some points outside the circle. The only difference I can think of is that in rejection sampling we get some samples, and MCMC calculates some numbers based on those samples.

    • @jiayiwu4101
      @jiayiwu4101 3 роки тому

      Sorry, I mean Monte Carlo simulation, not MCMC

  • @Ivanbetancxurt
    @Ivanbetancxurt 9 місяців тому

    sick vid brah

  • @jimlu9555
    @jimlu9555 11 місяців тому

    Superb! For your second simulation, why generate a uniform distribution on (0,1)

  • @Mars.2024
    @Mars.2024 День тому

    Thanks a million; the way you make any complex concept so simple and intuitive 🎉 Would you please make a video on the proof of the Bellman equation (the action-value and state-value Bellman equations)? It's the basis of reinforcement learning.

  • @user-qt4gy2de4n
    @user-qt4gy2de4n 3 місяці тому

    Great!

  • @jhondavidson2049
    @jhondavidson2049 4 місяці тому

    thank you !!

  • @joshl.8950
    @joshl.8950 2 роки тому +1

    One question. I think you said that Monte Carlo is good for situations where you'd perhaps "not know geometry basics". However, at 4:10, are we not using geometry basics to reason about this algorithm?

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 роки тому

    Great video!

  • @Prashanth-yn9zd
    @Prashanth-yn9zd Місяць тому

    If p is higher, then rand() < p is more often true, and that means nloss is more likely to reach 2 faster than in the case where p is lower. So if p is higher, the while loop will end faster, right?

  • @KhaledAmrouni
    @KhaledAmrouni 3 роки тому +3

    # Example of how many dots will randomly fall inside a circle, in Python (Monte Carlo method)
    # Required packages
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns
    ###-------------------------------------------------------
    # Lists of xs, ys and their squared values that fall inside the circle
    xs = []
    ys = []
    xsq = []
    ysq = []
    # Lists of xs, ys and their squared values that fall anywhere inside the whole square
    xs1 = []
    ys1 = []
    xsq1 = []
    ysq1 = []
    n = 10**3   # total number of dots
    circle = 0  # the number of dots that fall inside the circle
    for i in range(n):
        # to get float random values between (-1, 1) use uniform(-1, 1)
        x = np.random.uniform(-1, 1)
        y = np.random.uniform(-1, 1)
        xs1.append(x)
        ys1.append(y)
        xsq1.append(float(x**2))
        ysq1.append(float(y**2))
        # the dot lands inside the unit circle when x^2 + y^2 < 1
        if float(x**2) + float(y**2) < 1:
            circle += 1
            xs.append(x)
            ys.append(y)
            xsq.append(float(x**2))
            ysq.append(float(y**2))
    # the fraction of dots inside the circle approximates pi/4
    print("pi is approximately", 4 * circle / n)
    # plot all the dots, with the inside-the-circle ones drawn on top
    plt.scatter(xs1, ys1, s=2)
    plt.scatter(xs, ys, s=2)
    plt.gca().set_aspect("equal")
    plt.show()

    • @ritvikmath
      @ritvikmath  3 роки тому +1

      Wow thanks for providing this code!

    • @KhaledAmrouni
      @KhaledAmrouni 3 роки тому +1

      @@ritvikmath You are very welcome! Your videos are so awesome that they pushed me to code their logic.

  • @monmorelord6368
    @monmorelord6368 3 роки тому +1

    great video ...thanks

  • @matthewchunk3689
    @matthewchunk3689 3 роки тому +1

    Thank you!

  • @davidklotz9962
    @davidklotz9962 3 роки тому +1

    Thanks for this explanation! I was wondering though... the formula and the graph given for the second example don't seem to fit, do they?

    • @ritvikmath
      @ritvikmath  3 роки тому

      Thanks! Can you explain further about the formula and graph? I want to make sure to leave an update if there is indeed a mistake.

    • @davidklotz9962
      @davidklotz9962 3 роки тому

      Sorry, my mistake! All good with the equation and graph. Thanks again for the great video!

  • @IbraheemALBalushi
    @IbraheemALBalushi Рік тому

    Thanks

  • @miguelcosta9461
    @miguelcosta9461 3 місяці тому

    My first time seeing this channel. But I feel like I know this guy from television or something. Is he famous for something?

  • @rodeketan
    @rodeketan 3 місяці тому

    Great

  • @rahulgautam6922
    @rahulgautam6922 3 роки тому +1

    Hi Ritvik, great video. I had one doubt regarding the code for the two-consecutive-losses simulation: by just checking that the loss counter != 2, aren't we considering cases of not more than 2 losses in the whole experiment, instead of checking for not more than 2 "consecutive" losses?

    • @ritvikmath
      @ritvikmath  3 роки тому +1

      The nloss=0 line is like a reset of the current running losses to 0 if we get a win. Good question!

    • @rahulgautam6922
      @rahulgautam6922 3 роки тому

      @@ritvikmath Ahh, I see, I didn't catch that; thanks for clearing it up. Cheers!!

  • @jebberwocky
    @jebberwocky Рік тому

    according to ChatGPT
    So given a winning probability x, the expected number of rounds of the game that ends when the player loses twice in a row is 1/x + 1/(x^2)

  • @qianlingpan6773
    @qianlingpan6773 3 роки тому

    Hi, can you share how Monte Carlo is used in financial risk control?

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 роки тому +1

    What inspired you to use the example you chose for example 2? Can you think of any real-world application? The one that comes to mind is something like how long a product lasts before it breaks down, so you can determine the cost of a warranty.

    • @ritvikmath
      @ritvikmath  3 роки тому

      This is a great question: why is this useful? Indeed, your example is great. Another might be if you're looking for a job and you know the probability of getting a job is p, then on average, how many interviews do you need to go through before getting a job.

  • @fellygraytv1551
    @fellygraytv1551 3 роки тому

    Hello Ritvik, thanks for the video. Can you clarify why you check if the random number is less than the probability of winning (0.5 if we use that)? Shouldn't it be if the random number is greater than or equal to the probability of winning? Why should the number be lower, and not higher, than the probability of winning (for the second example)?

    • @ritvikmath
      @ritvikmath  3 роки тому +1

      Good question, I get confused by that sometimes too. Think of it like this; if the probability of winning is high like 0.9, and then you generate a random number between 0 and 1, you want to check if it is less than 0.9 because that will happen with a 90% chance. On the other hand, if the prob of winning is low like 0.1, then you want to check if the random number is less than 0.1 since that happens with a 10% chance.

  • @enchantularity
    @enchantularity 5 місяців тому

    Will you please explain to me the solution for finding the average length of a line segment inside a square of side a?

  • @ankitsekseria6487
    @ankitsekseria6487 2 роки тому +1

    @ritvikmath At 11:05 if rand(0,1) is less than the probability of a win (p) then it means we lost the round and "nloss" must be incremented by 1. Am I missing something?

    • @gokulakrishnancandassamy4995
      @gokulakrishnancandassamy4995 10 місяців тому

      I can see what is causing this confusion. Assume that the RAND() function returns a number in [0,1] (uniformly distributed) at random. Say the probability of winning is p = 0.9. In such a case, we will actually be playing a large number of rounds before losing. On the other hand, if p = 0.1, our game won't last that long, as we might encounter two failures much earlier.
      So, in the first case, to simulate the abovementioned behavior, whenever RAND() returns a number less than 0.9, we will consider it as a success.
      Hope this clarifies!
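
      A tiny sanity check of that point (my own illustration, not the video's code): comparing RAND() to p is just a way of drawing a Bernoulli(p) win/loss outcome, so the fraction of "wins" should come out close to p.

      import numpy as np

      rng = np.random.default_rng(0)
      for p in (0.1, 0.5, 0.9):
          wins = rng.random(1_000_000) < p   # True with probability p
          print(p, wins.mean())              # prints roughly 0.1, 0.5, 0.9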

  • @youtubeplaylist4072
    @youtubeplaylist4072 2 роки тому

    nice video

  • @apachelee884
    @apachelee884 2 роки тому

    May I ask a question? Apart from Monte Carlo, is there any alternative method that does the same things as MC?

  • @rayyanmostapha2061
    @rayyanmostapha2061 11 місяців тому

    why when rand()

  • @nitros7725
    @nitros7725 Рік тому

    Great video! For the algo

  • @user-uz4ip3sy9w
    @user-uz4ip3sy9w 8 днів тому

    What happens if the losing condition is that I get a head followed by a tail (p = 1/2)? I obtained an average number of rounds e = 4.

  • @shonendumm
    @shonendumm Рік тому +2

    I can't grasp how you came up with the recursive equation for the expected number of rounds. I understand that the expected value is the total sum of (expected outcome value * probability of each outcome), but I can't move forward from that. I wish I could understand your reasoning for the recursive equation. Could you explain, or perhaps do a video on that?

    • @shonendumm
      @shonendumm Рік тому +1

      I think it's the recursive part that's hard to understand, e.g. (1 + e)*p

    • @lbognini
      @lbognini 2 місяці тому

      I'll explain it the way I understood it 😉
      What really renews our expectation is a success on a draw (round). We get a kind of refill (since the count of failures is reset), then we capitalize on the rounds up to the success and take a weighted average of that with the expected number of rounds for the rest of the row. Let's call the latter e'.
      As he explained, we have 3 cases.
      Two cases involve a success:
      1- Success on the first round.
      2- Failure, then success.
      With the third case (two consecutive failures), there's no remaining expectation since the row ends (e' = 0):
      3- Failure, then failure.
      So:
      1- We capitalize on 1 round with probability p and also get the remaining expected number of rounds e' with the same probability p (yes, that is the condition for getting this e'). So the contribution (weighted average) for this case is p*1 + p*e' = (1+e')*p.
      2- We capitalize on 2 rounds (one round for the failure and one for the success) with probability (1-p)*p. So the contribution is (1-p)*p*2 + (1-p)*p*e" = (2+e")*(1-p)*p. Note that this time we call the remaining expected number e".
      3- For the third case, we have 2 rounds with probability (1-p)*(1-p) and nothing after that (end of the row). So the contribution is 2*(1-p)*(1-p).
      Now, the definition of expectation always considers the limit as the number of draws approaches infinity. In that limit, e' and e" both converge to the same number e, so we can replace e' and e" with e. Hence the recursive formula.
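
      For anyone who wants the algebra finished off (my own working from the recursion above, taking e' = e" = e):

      e = (1 + e)*p + (2 + e)*(1-p)*p + 2*(1-p)*(1-p)
      e*(1 - p - p*(1-p)) = p + 2*p*(1-p) + 2*(1-p)^2
      e*(1 - p)^2 = 2 - p
      e = (2 - p) / (1 - p)^2

      For p = 0.5 this gives 6, the value quoted in other comments here. Writing q = 1 - p for the losing probability, it is the same as 1/q + 1/q^2, which also reconciles the ChatGPT formula quoted above once x is read as the probability of losing rather than winning.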

  • @chandrasekarank8583
    @chandrasekarank8583 3 роки тому

    What a coincidence, I just saw the same concept as a problem on the Joma Tech channel, where he wants us to find the value of pi, which he says is one of his favourite interview problems.
    Do check it out, guys.

  • @its8524
    @its8524 3 роки тому

    Hi Ritvik, in the second example you mentioned that simulation time increases as the value of p increases. I think the code has to run all the lines at most 1M times anyway, so how is the simulation time changing, given we are not adding any new operation?

    • @its8524
      @its8524 3 роки тому

      For the average, I agree that the code has to work based on the number of values coming into the rounds array. Would the MC time remain the same?

    • @ritvikmath
      @ritvikmath  3 роки тому +3

      Hey that's a good question. Indeed, we always do 1M rounds, but when the probability of winning a round is very very high (around 1), then each of the 1M games will last many many rounds since we need to see 2 losses in a row to move on to the next game. However, if the probability of winning a round is very very low (around 0), then almost surely the game will end in 2 rounds so our simulation runs much faster.

    • @Manishsingh-dl6ho
      @Manishsingh-dl6ho 3 роки тому

      @@ritvikmath In that case we can rephrase the problem in terms of the failure case.

    • @its8524
      @its8524 3 роки тому +1

      @@ritvikmath Got it... Thanks for the explanation

    • @ritvikmath
      @ritvikmath  3 роки тому

      @@Manishsingh-dl6ho That's true, a good suggestion!

  • @queenisforever1
    @queenisforever1 3 роки тому +1

    Thanks. I came across your channel only recently and loved it. I want to learn how to code; could you please advise me on how to go about it?

    • @ritvikmath
      @ritvikmath  3 роки тому +1

      Great suggestion! I'm dropping some "Code With Me" videos in the coming weeks. (First one is dropping tomorrow!)

    • @queenisforever1
      @queenisforever1 3 роки тому

      @@ritvikmath Thanks!

  • @gagepatterson4770
    @gagepatterson4770 3 місяці тому

    I think you could have just said the probability of losing 2x in a row is p^2, so (1 - p^2) is the probability of not losing 2x in a row. So 3 games out of every 4, but since the set was split, normalizing each set would give 6 games. Now I would want to run the simulation with both varying odds of losing and a varying number of times in a row; i.e., for 3x in a row with a 50% win/loss rate, I'd guess the EV would be 21?
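
    Guesses like this are easy to check numerically. Below is a hypothetical helper (mine, not the video's code) that generalizes the simulation to k losses in a row, with q the per-round probability of losing; the standard closed form for the expected number of rounds until k consecutive failures is (1 - q^k) / ((1 - q) * q^k), which gives 6 for k=2 and 14 for k=3 at q = 0.5.

    import numpy as np

    rng = np.random.default_rng(0)

    def avg_rounds_until_streak(q, k, n_games=20_000):
        # average number of rounds until k consecutive losses, with loss probability q
        total = 0
        for _ in range(n_games):
            streak, rounds = 0, 0
            while streak < k:
                rounds += 1
                if rng.random() < q:   # a loss: the streak grows
                    streak += 1
                else:                  # a win: the streak resets
                    streak = 0
            total += rounds
        return total / n_games

    print(avg_rounds_until_streak(0.5, 3))  # roughly 14 for a fair game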

  • @jaysakarvadia3527
    @jaysakarvadia3527 Рік тому +1

    I am having a little bit of trouble understanding 3:45. The x^2+y^2

  • @antoniocrealsousa4596
    @antoniocrealsousa4596 6 місяців тому

    Hello. I believe that the expected value of rounds, E(r), that you calculated analytically is wrong. I think that the number of rounds until you get the second (inclusive) "loss" in a row is a negative binomial with r = 2 [Y ~ NB(r=2, p)], and the number of rounds equals Y+2. As Y+2 is a linear combination of Y, E(Y+2) = E(Y) + 2. As Y is negative binomial, E(Y) = r*p/(1-p) and E(Y+2) = 2 + r*p/(1-p). So, for example, for p = 0.3 the expected number of rounds would be E(Y+2) = 2 + 2*0.3/0.7 = 2.857, and for p = 0.5, E(Y+2) = 4.000. Thanks.

  • @user-er9zc8ni1s
    @user-er9zc8ni1s 6 місяців тому

    So tell me how you play one time at 8:13. You can't play one time; the minimum number of times you can play is two. So the probability of playing one time is 0?

  • @404username
    @404username 3 роки тому

    Hi, could you possibly show me how to solve for losing 4 times in a row?
    Thanks!

  • @angstrom1058
    @angstrom1058 Рік тому

    Monte Carlo & Stochastic... fancy words for "Random"

  • @vikranthkanumuru8900
    @vikranthkanumuru8900 Рік тому

    Didn't we assume that we don't know anything about the geometry of the circle? How can we justify saying x^2 + y^2

  • @sivuyilesifuba
    @sivuyilesifuba 3 місяці тому

  • @user-cy9zf4oz2i
    @user-cy9zf4oz2i 3 місяці тому

    Why does it sound like Markov chains?

  • @stevengusenius7333
    @stevengusenius7333 3 роки тому

    I don't disagree with any of your comments, but I feel like you might be over-emphasizing the drawbacks.
    I would argue that there are plenty of problems that cannot be solved using first principles. For example, analysis of circuit performance with ideal components is easy enough, but if you want the circuit to contain realistic components (values that follow distributions) it may be too impractical to solve. This is because every component value can have interactions with every other value. Additionally, closed-form solutions tend not to handle discontinuities well. For such problems MC is the bomb.
    Also, if you highlight that the interpretability of an MC solution is an issue, I think it is only fair to acknowledge that the interpretability of the methodology is a credit. One of MC's strengths is that it tends not to hide its assumptions. If you are modeling a physical process, it is generally easy to see how each step in the process was simulated. In a peer-review environment, this can save a lot of bickering. In this same vein, MC can also be quick to modify in response to criticism. Closed-form solutions... not so much.
    In your 2nd example you returned the mean, which is what the problem asked for, but you automatically get distribution data, too. Add a bootstrap and you have a confidence estimate. If your MC has multiple inputs, you can wrap it and perform a sensitivity analysis. This buys back some of what was lost by not having a tidy equation.
    Given the ease-to-power ratio, I wish engineers would learn MC as one of their basic tools, but this has not been my experience.
    My comments aside, thanks for covering this.

    • @ritvikmath
      @ritvikmath  3 роки тому

      Hey, I really appreciate all your feedback! I'm also a huge believer in applying MC to problems if possible, but do feel like even in the realm of MC it is important to understand some of the first principles since it could inform why your simulation is taking a long time to run among other things. I think your comments on MC use in engineering is very valuable to others since personally I didn't study engineering. I also think your comment on getting a full distribution with MC is brilliant and is something I wish I'd have highlighted if I was able to go back and re-make this video. Thanks!
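
      A toy sketch of the component-tolerance use case described in this thread (all numbers below are made up for illustration): the cutoff frequency of an RC low-pass filter is f = 1/(2*pi*R*C), and instead of plugging in the nominal R and C we draw them from assumed tolerance distributions, look at the whole distribution of f, and add a bootstrap confidence interval for its mean.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # hypothetical components: 1 kOhm resistor at 5% tolerance, 100 nF capacitor at 10% (assumed normal)
      R = rng.normal(1_000, 0.05 * 1_000, n)
      C = rng.normal(100e-9, 0.10 * 100e-9, n)
      f = 1 / (2 * np.pi * R * C)  # cutoff frequency of each simulated build

      print("nominal cutoff:", 1 / (2 * np.pi * 1_000 * 100e-9))
      print("simulated mean:", f.mean(), "5th-95th percentile:", np.percentile(f, [5, 95]))

      # bootstrap: resample the simulated builds to get a confidence interval on the mean
      boot_means = [rng.choice(f, size=n, replace=True).mean() for _ in range(200)]
      print("95% CI for the mean:", np.percentile(boot_means, [2.5, 97.5]))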

  • @cylurian
    @cylurian Рік тому

    A question would be: I'm guessing aiming at the center wouldn't be part of this simulation?

  • @sanawarhussain
    @sanawarhussain 5 місяців тому

    But if p is close to 1, why do we need a simulation to begin with? o.O

  • @csabaszekely3098
    @csabaszekely3098 Рік тому

    Just use Monte Carlo when you need to multiply two confidence intervals, or add up two confidence intervals with non-normal distributions. This is what it was developed for during the Manhattan Project in the '40s, and frankly I don't see the point in forcing it onto anything else.

  • @capsbr2100
    @capsbr2100 2 роки тому

    Cannot read the board.

  • @edmartin6245
    @edmartin6245 24 дні тому

    A model is just your formula. Monte Carlo is just putting random numbers into your model. Adding random numbers is just a way to test your formula (model): using random numbers to see how unpredictable (random) events affect your model (formula). Just very simple algebra.

  • @nadewuyi
    @nadewuyi Місяць тому

    The only flaw in this presentation is forgetting to mention the coding program used: MATLAB, Python, or something else? Does anyone know what program he used? He forgot to mention that.

    • @ritvikmath
      @ritvikmath  Місяць тому +1

      python was used

    • @nadewuyi
      @nadewuyi Місяць тому

      @@ritvikmath thank you

  • @marcuslerch2413
    @marcuslerch2413 Рік тому

    Pure, very clear, simple, and easy-to-understand explanation, right to the point.
    Overall the best video for understanding the concept.
    Thanks, sir, for providing such a great video.

  • @malinyamato2291
    @malinyamato2291 Рік тому

    thank you for an excellent and eays to understand explanation of MC and its limitations. MC is the layzy guys' way to get the number.....LOL