You've been using the Wrong Random Numbers! - Monte Carlo Simulations

  • Published 10 May 2022
  • In this tutorial we discuss Monte Carlo convergence and the difference between pseudo-random numbers and quasi-random numbers. In previous tutorials we discussed the benefits of combining Monte Carlo variance reduction techniques, such as antithetic and control variate methods, to reduce the standard error of our simulation.
    We demonstrate the effectiveness of quasi-random numbers by comparing convergence when pricing a European call option by Monte Carlo simulation, using different methods for generating pseudo- and quasi-random variables (a short sketch of these generators follows the lists below).
    Pseudo-random number generation:
    - add 12 uniform variables
    - Box-Muller
    - Polar Rejection
    - Inverse transform sampling (like NumPy)
    Quasi-random number generation:
    - Halton
    - Sobol
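    As a rough illustration of the generators listed above, here is a minimal Python sketch; the seeds and sample size are arbitrary choices for this example, and NumPy/SciPy are assumed to be available:

      import numpy as np
      from scipy.stats import norm, qmc

      rng = np.random.default_rng(42)
      n = 2**10  # Sobol works best with powers of two

      # Pseudo-random standard normals
      z_sum12 = rng.uniform(size=(n, 12)).sum(axis=1) - 6.0  # sum of 12 uniforms, centred (Irwin-Hall approximation)
      u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
      z_box_muller = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)  # Box-Muller transform
      # Polar rejection (Marsaglia): sample the unit disc, reject points outside it
      v = rng.uniform(-1.0, 1.0, size=(2 * n, 2))
      s = (v ** 2).sum(axis=1)
      keep = (s > 0) & (s < 1)
      v, s = v[keep][:n], s[keep][:n]
      z_polar = v[:, 0] * np.sqrt(-2.0 * np.log(s) / s)
      z_inverse = norm.ppf(rng.uniform(size=n))  # inverse transform sampling
      z_numpy = rng.standard_normal(n)           # NumPy's built-in generator

      # Quasi-random standard normals: low-discrepancy uniforms mapped through the inverse CDF
      z_halton = norm.ppf(qmc.Halton(d=1, scramble=True, seed=42).random(n)).ravel()
      z_sobol = norm.ppf(qmc.Sobol(d=1, scramble=True, seed=42).random_base2(m=10)).ravel()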
    It turns out that pseudo-random numbers are a poor choice for Monte Carlo simulation. Consider pairs of independent, uniformly distributed random numbers: since the numbers are independent and uniformly distributed, every point on the graph is equally likely, yet we observe clumps and empty spaces. If we sampled enough points, the initial clumps and empty spaces would eventually be swamped by the large number of points spread evenly.
    Unfortunately, in Monte Carlo simulation the aim is often to reduce the number of samples to decrease computation time (as has been the aim of the variance reduction techniques).
    Pseudo-random numbers introduce bias through this clumpiness!
    In contrast, quasi-random numbers, or low-discrepancy sequences, are designed to appear random without being clumpy. Quasi-random samples are not independent of one another: each sample 'remembers' the previous samples and positions itself away from them. This behaviour is ideal for obtaining fast convergence in a Monte Carlo simulation, as sketched below. We show Halton and Sobol sequences because these are implemented in SciPy!
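    A minimal sketch of that comparison follows. The option parameters, seeds, and sample size are illustrative assumptions, not the values used in the video: it measures the discrepancy of pseudo versus Halton pairs, then prices a European call with pseudo-random and Sobol-derived normals against the closed-form Black-Scholes value.

      import numpy as np
      from scipy.stats import norm, qmc

      # Illustrative parameters (assumed for this sketch)
      S0, K, T, r, sigma = 100.0, 110.0, 1.0, 0.05, 0.2
      m = 12            # 2**12 = 4096 samples
      n = 2**m
      rng = np.random.default_rng(1)

      # Clumpiness: centred L2 discrepancy of 2D uniform pairs (lower = more evenly spread)
      print("discrepancy, pseudo:", qmc.discrepancy(rng.uniform(size=(n, 2))))
      print("discrepancy, Halton:", qmc.discrepancy(qmc.Halton(d=2, scramble=True, seed=1).random(n)))

      # European call by Monte Carlo under geometric Brownian motion
      def call_price_mc(z):
          ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
          return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

      z_pseudo = rng.standard_normal(n)
      z_sobol = norm.ppf(qmc.Sobol(d=1, scramble=True, seed=1).random_base2(m)).ravel()

      # Closed-form Black-Scholes call price as the reference value
      d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
      d2 = d1 - sigma * np.sqrt(T)
      bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

      print("Black-Scholes:", bs)
      print("pseudo MC    :", call_price_mc(z_pseudo))
      print("Sobol QMC    :", call_price_mc(z_sobol))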
    ★ ★ Code Available on GitHub ★ ★
    GitHub: github.com/TheQuantPy
    Specific Tutorial Link: github.com/TheQuantPy/youtube...
    ★ ★ QuantPy GitHub ★ ★
    Collection of resources used on the QuantPy YouTube channel. github.com/thequantpy
    ★ ★ Discord Community ★ ★
    Join a small niche community of like-minded quants on discord. / discord
    ★ ★ Support our Patreon Community ★ ★
    Get access to Jupyter Notebooks that can run in the browser without downloading Python.
    / quantpy
    ★ ★ ThetaData API ★ ★
    ThetaData's API provides both real-time and historical options data for end-of-day and intraday trades and quotes. Use coupon 'QPY1' to receive 20% off your first month.
    www.thetadata.net/
    ★ ★ Online Quant Tutorials ★ ★
    WEBSITE: quantpy.com.au
    ★ ★ Contact Us ★ ★
    EMAIL: pythonforquants@gmail.com
    Disclaimer: All ideas, opinions, recommendations and/or forecasts, expressed or implied in this content, are for informational and educational purposes only and should not be construed as financial product advice or an inducement or instruction to invest, trade, and/or speculate in the markets. Any action or refraining from action, and any investments, trades, and/or speculations made in light of the ideas, opinions, and/or forecasts expressed or implied in this content, are committed at your own risk and consequence, financial or otherwise. As an affiliate of ThetaData, QuantPy Pty Ltd is compensated for any purchases made through the link provided in this description.

COMMENTS • 20

  • @QuantPy
    @QuantPy  2 years ago +5

    Let me know your thoughts on using pseudo vs quasi-random numbers.
    Is it worth the effort compared to using np.random.normal()? 🤔

  • @TimoFriedl
    @TimoFriedl 2 years ago +1

    Thanks a lot for this high quality video

  • @Alexander-pk1tu
    @Alexander-pk1tu 1 year ago

    thank you for your very informative video

  • @zebmason6530
    @zebmason6530 1 year ago

    Very interesting. Got me worried for a minute until I remembered that at the start of the year I was using uniformly distributed random numbers in a Fisher-Yates shuffle (for an epidemic simulation).

  • @kevinshen3221
    @kevinshen3221 2 years ago

    Never really knew the random generator could fall short! Thanks for this vid

  • @vladk9152
    @vladk9152 2 years ago +1

    I'll try implementing this in my equity curve simulation

  • @kilocesar
    @kilocesar 2 months ago

    Very good

  • @arcade-fighter
    @arcade-fighter 2 months ago

    Congrats on the video!! This topic is really unknown in the quant community. I used to apply QMC in my Master's thesis, via Halton, Sobol and other sequences, as you mentioned here. With QMC we can push back the "curse of dimensionality" and converge faster than raw MC. In my case I did a lot of experiments to support the QMC thesis, working out valuations of exotic options (with no easy analytical solution) such as spread and lookback stock options.

  • @ghostwhowalks5623
    @ghostwhowalks5623 1 year ago

    This is great! How would you sample from Halton repeatedly and get different numbers? E.g. in Matlab I can change randn(i, 1:5) and loop through i. Not sure how to do it for the Halton sequence....

  • @var7397
    @var7397 2 years ago

    Thanks from Ukraine!
    You inspired me 🙂

  • @maxhohenstein4554
    @maxhohenstein4554 2 years ago

    How would you recommend a newbie learn Python?

  • @marcoesteves4367
    @marcoesteves4367 2 years ago

    I tried this approach in R with quasi-random numbers but got put and call values very far from market values.

    • @QuantPy
      @QuantPy  2 years ago +1

      Did you convert the quasi-random numbers to normally distributed numbers?

    • @marcoesteves4367
      @marcoesteves4367 2 years ago +1

      @@QuantPy Spot on! I forgot it! Now it's ok.
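
      For anyone who hits the same issue, a minimal sketch of that conversion step in Python (assuming SciPy's Sobol sampler; any low-discrepancy uniforms work the same way):

        from scipy.stats import norm, qmc

        u = qmc.Sobol(d=1, scramble=True, seed=7).random_base2(m=10)  # quasi-random uniforms in (0, 1)
        z = norm.ppf(u)                                               # standard normals via the inverse CDF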

  • @davidetrevi3918
    @davidetrevi3918 1 year ago

    Around the end of the video you defined the relative error as the difference between the approximation and the exact BS formula. Shouldn't you divide by the exact value to get relative errors?

  • @chillydickie
    @chillydickie 2 years ago

    Comment for the YouTube algorithm

  • @Drewww71
    @Drewww71 2 years ago

    In my humble opinion, quasi-random numbers are not truly random if they remember the previous sequences, and therefore they lead to bias

    • @djangoworldwide7925
      @djangoworldwide7925 2 years ago +3

      I think he never intended to claim they're fully random, but rather that they have their benefit in the MC simulation because of their dependence on the previous samples, ensuring even spacing between points