Brunei Math Club
The sum of independent Gaussian random vectors is Gaussian.
We show that the sum of independent Gaussian random vectors is again a Gaussian vector. The proof is similar to the univariate Gaussian case but a bit more tedious due to the handling of vectors and matrices.
Subscribe:
www.youtube.com/@BruneiMathClub?sub_confirmation=1
Twitter:
BruneiMath
53 views
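
A quick numerical sanity check of this claim (a sketch, not code from the video; all names below are illustrative): sample two independent Gaussian vectors with NumPy and confirm that the empirical mean and covariance of the sum match μ₁+μ₂ and Σ₁+Σ₂.

```python
# Sketch (not from the video): Monte Carlo check that for independent
# X ~ N(mu1, S1) and Y ~ N(mu2, S2), the sum X + Y has mean mu1 + mu2
# and covariance S1 + S2. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = np.array([1.0, -2.0]), np.array([0.5, 3.0])
S1 = np.array([[2.0, 0.3], [0.3, 1.0]])
S2 = np.array([[1.0, -0.4], [-0.4, 2.0]])

X = rng.multivariate_normal(mu1, S1, size=200_000)
Y = rng.multivariate_normal(mu2, S2, size=200_000)
Z = X + Y

print(Z.mean(axis=0))           # approx mu1 + mu2 = [1.5, 1.0]
print(np.cov(Z, rowvar=False))  # approx S1 + S2
```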

Videos

Linear transformation of a Gaussian vector is Gaussian
85 views · 2 months ago
We show that the linear transformation of a Gaussian random vector is again a Gaussian random vector. Subscribe: www.youtube.com/@BruneiMathClub?sub_confirmation=1 Twitter: BruneiMath
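
A minimal numerical sketch of this claim (illustrative names, not the video's code): sampling X ~ N(μ, Σ) and checking that Y = AX + b has mean Aμ + b and covariance AΣAᵀ.

```python
# Sketch: empirical check that Y = A X + b is Gaussian with mean A mu + b
# and covariance A Sigma A^T (names illustrative, not from the video).
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, 2.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])
A = np.array([[1.0, -1.0, 0.0],
              [0.5,  0.5, 2.0]])
b = np.array([3.0, -2.0])

X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ A.T + b                 # apply x -> A x + b row-wise

print(Y.mean(axis=0))           # approx A @ mu + b
print(np.cov(Y, rowvar=False))  # approx A @ Sigma @ A.T
```
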
The sum of Gaussian random variables is Gaussian (univariate case)
66 views · 2 months ago
We prove that the sum of Gaussian random variables is Gaussian for the univariate case. (The multivariate case will be presented in another video.) Subscribe: www.youtube.com/@BruneiMathClub?sub_confirmation=1 Twitter: BruneiMath
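
The statement being proved, written out (a transcription in standard notation; the video's notation may differ):

```latex
% Transcription of the claim (the video's notation may differ): for independent
% X ~ N(mu_x, sigma_x^2) and Y ~ N(mu_y, sigma_y^2), the density of Z = X + Y
% is the convolution of the two densities:
\[
\rho_Z(z) = \int_{-\infty}^{\infty} \rho_X(x)\,\rho_Y(z-x)\,dx
          = \frac{1}{\sqrt{2\pi(\sigma_x^2+\sigma_y^2)}}
            \exp\!\left[-\frac{\bigl(z-(\mu_x+\mu_y)\bigr)^2}{2(\sigma_x^2+\sigma_y^2)}\right],
\]
% i.e. Z ~ N(mu_x + mu_y, sigma_x^2 + sigma_y^2).
```
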
Kernel method and classification problem
67 views · 2 months ago
We study how the kernel method can be applied to nonlinear classification problems. Subscribe: www.youtube.com/@BruneiMathClub?sub_confirmation=1 Twitter: BruneiMath
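
A minimal sketch of how a kernel method can handle a nonlinear decision boundary (an illustration, not the video's construction): kernel ridge regression on ±1 labels with an RBF kernel, classifying by the sign of f(x) = Σᵢ αᵢ k(x, xᵢ).

```python
# Sketch (assumption: the video's construction may differ): nonlinear
# classification via kernel ridge regression on +/-1 labels with an RBF
# kernel, predicting by the sign of f(x) = sum_i alpha_i k(x, x_i).
import numpy as np

def rbf(X, Y, gamma=2.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5, 1.0, -1.0)  # circular boundary

lam = 1e-3
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = rng.uniform(-1, 1, size=(1000, 2))
y_test = np.where(X_test[:, 0] ** 2 + X_test[:, 1] ** 2 < 0.5, 1.0, -1.0)
pred = np.sign(rbf(X_test, X) @ alpha)
print("accuracy:", (pred == y_test).mean())
```
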
Proving a Representer Theorem
108 views · 2 months ago
Herein we prove a Representer Theorem, which gives a justification for applying the kernel method to regression problems in general. Subscribe: www.youtube.com/@BruneiMathClub?sub_confirmation=1 Twitter: BruneiMath
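
One common form of the statement (a paraphrase; the version proved in the video may be more or less general):

```latex
% One common form of the theorem (a paraphrase): given a kernel k with RKHS H,
% data (x_1, y_1), ..., (x_n, y_n), a loss L, and a nondecreasing regularizer
% Omega, any minimizer
\[
f^{*} \in \arg\min_{f \in H} \; \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr)
        + \Omega\bigl(\lVert f \rVert_{H}\bigr)
\]
% can be written as a finite linear combination of kernels at the data points:
\[
f^{*}(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i), \qquad \alpha_i \in \mathbb{R}.
\]
```
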
Applying the kernel method to polynomial regression
58 views · 2 months ago
By appropriately choosing a kernel function, we can readily solve the polynomial regression problem. Subscribe: www.youtube.com/@BruneiMathClub?sub_confirmation=1 Twitter: BruneiMath
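
A minimal sketch of the idea (illustrative code, not the video's): fitting a noisy cubic with the polynomial kernel k(x, y) = (1 + xy)³ and a regularized Gram-matrix solve, with no explicit feature matrix.

```python
# Sketch (illustrative, not the video's code): degree-3 polynomial regression
# via the polynomial kernel k(x, y) = (1 + x*y)**3 and a regularized
# Gram-matrix solve, instead of building the feature matrix explicitly.
import numpy as np

def poly_kernel(x, y, degree=3):
    return (1.0 + np.outer(x, y)) ** degree

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, size=50)
y = x ** 3 - 2 * x + rng.normal(0, 0.3, size=50)  # noisy cubic

lam = 1e-6
K = poly_kernel(x, x)
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)

x_new = np.linspace(-2, 2, 5)
y_hat = poly_kernel(x_new, x) @ alpha   # f(x) = sum_i alpha_i k(x, x_i)
print(np.round(y_hat, 2))
print(np.round(x_new ** 3 - 2 * x_new, 2))  # noise-free target for comparison
```
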
Kernel method for regression: The basic idea
66 views · 2 months ago
By choosing some appropriate kernel function and the usual linear regression technique, we can solve nonlinear regression problems. The resulting approximating function is represented as a linear combination of reproducing kernels. Subscribe: www.youtube.com/@BruneiMathClub?sub_confirmation=1 Twitter: BruneiMath
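
The shape of the resulting solution, written out (a transcription; λ > 0 is the ridge parameter and K the Gram matrix):

```latex
% The form of the solution described above, transcribed: with Gram matrix
% K_{ij} = k(x_i, x_j), targets y, and ridge parameter lambda > 0,
\[
\hat{f}(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i),
\qquad
\boldsymbol{\alpha} = (K + \lambda I)^{-1}\mathbf{y}.
\]
```
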
From kernel function to reproducing kernel Hilbert space
73 views · 3 months ago
Given a kernel function k(x,y), we can construct a unique reproducing kernel Hilbert space (the Moore-Aronszajn theorem). To do this, we first regard the bivariate kernel function as a parametrized univariate function k_{x}(y) = k(y,x), then consider a vector space of functions spanned by those parametrized univariate (kernel) functions. We can introduce an inner product in this space, which in...
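
The inner product in question, written out (a transcription of the standard construction; the video's notation may differ):

```latex
% The construction, transcribed (standard Moore-Aronszajn argument): on finite
% combinations of the functions k_x = k(., x), define
\[
\Bigl\langle \textstyle\sum_i a_i k_{x_i}, \; \sum_j b_j k_{y_j} \Bigr\rangle
  = \sum_{i,j} a_i b_j \, k(x_i, y_j),
\]
% which yields the reproducing property <f, k_x> = f(x); completing this
% inner-product space gives the RKHS.
```
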
Examples of kernel functions
64 views · 3 months ago
The function k: X×X → R (the real numbers) is called a kernel function (on X) if it is symmetric and positive semi-definite. In this video, we show some examples of kernel functions and how to construct new kernel functions from other functions. Subscribe: www.youtube.com/@BruneiMathClub?sub_confirmation=1 Twitter: BruneiMath
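
A quick numerical illustration of the two defining properties (a sketch, not from the video): checking symmetry and positive semi-definiteness of the Gram matrix of the RBF kernel.

```python
# Sketch (illustrative): numerically checking the two defining properties of
# a kernel -- symmetry and positive semi-definiteness of the Gram matrix --
# for the RBF kernel k(x, y) = exp(-|x - y|^2).
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=30)
K = np.exp(-(x[:, None] - x[None, :]) ** 2)   # Gram matrix K_ij = k(x_i, x_j)

print(np.allclose(K, K.T))                    # symmetric
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # eigenvalues >= 0 (up to rounding)
```
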
Riesz representation theorem
61 views · 3 months ago
Given a Hilbert space H and an element x0 in H, we can define a map φ_{x0}(x) on H in terms of the inner product between x and x0. This map is linear and bounded. The Riesz representation theorem states that the converse is also true: any bounded linear map (functional) on H can be uniquely represented as an inner product. As a consequence, function application (such as f(x)) can be...
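
The statement, written out (a transcription in standard notation):

```latex
% The statement, transcribed: for a Hilbert space H and a bounded linear
% functional phi : H -> R, there is a unique x_0 in H such that
\[
\varphi(x) = \langle x, x_0 \rangle \quad \text{for all } x \in H,
\qquad
\lVert \varphi \rVert = \lVert x_0 \rVert.
\]
```
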
Hilbert Projection Theorem
110 views · 3 months ago
Abstract Hilbert space
46 views · 3 months ago
An example of the Hilbert space
643 views · 4 months ago
Cauchy's integral formula and L^2 inner product
76 views · 4 months ago
Orthonormal basis in a function space
84 views · 4 months ago
L^2 inner product between functions
250 views · 4 months ago
Laurent series vs. Fourier series
678 views · 4 months ago
Fourier series (a brief review)
70 views · 4 months ago
Complex-valued exponential function
41 views · 4 months ago
Positive-definite matrices
85 views · 5 months ago
Positive semi-definite matrices
245 views · 5 months ago
Symmetric matrices
153 views · 5 months ago
Orthogonal Projection
39 views · 5 months ago
Orthonormal basis of a vector space
32 views · 5 months ago
Inner product in Linear Algebra
138 views · 5 months ago
A continuous function on a closed interval is Riemann-integrable
102 views · 5 months ago
A continuous function on a closed interval is uniformly continuous
168 views · 5 months ago
Marginal distributions of the multivariate Normal distribution are normal.
302 views · 5 months ago
Deriving the multivariate normal distribution from the maximum entropy principle
311 views · 5 months ago
Maximum entropy and the normal distribution
630 views · 5 months ago

COMMENTS

  • @narimanmammadli7169
    @narimanmammadli7169 10 days ago

    Perplexity forwarded me here :) Thank you for the proof.

  • @angelinausim9863
    @angelinausim9863 17 days ago

    Geometric processes have discrete time and continuous space 01:35

    • @BruneiMathClub
      @BruneiMathClub 17 days ago

      Right! Thanks for pointing it out. Cf. Wikipedia: en.wikipedia.org/wiki/Geometric_process

  • @ickywitchy4667
    @ickywitchy4667 1 month ago

    Best video I could find for this topic!

  • @JaafarJoulakMuhammad
    @JaafarJoulakMuhammad 2 months ago

    How can I study the uniform convergence of the series of functions ∑ (x/(x^2+1))^k, where x ranges over R?

    • @BruneiMathClub
      @BruneiMathClub 2 months ago

      I'm not sure what you mean exactly. Why don't you make a video and let me know when you figure it out?

    • @JaafarJoulakMuhammad
      @JaafarJoulakMuhammad 2 months ago

      The intent of my statement is that I want a way to study the uniform convergence of the above series of functions.

  • @tjerkharkema7378
    @tjerkharkema7378 2 months ago

    Thanks a lot for your excellent explanation, Dr. Akira. Maybe you dropped a factor of 2 in the denominator in the exponential; the result should be ρ_z(z) = 1/√(2π(σ_x^2+σ_y^2)) exp[-(z-(μ_x+μ_y))^2/(2(σ_x^2+σ_y^2))]. TJ

  • @keanub.1693
    @keanub.1693 2 months ago

    King

  • @lebl3278
    @lebl3278 3 months ago

    Very good, thanks

  • @junma3575
    @junma3575 3 months ago

    Shouldn't P(X) be P(Xi)*P(Xj) in the variance term? Could still using P(X) be a mistake?

    • @BruneiMathClub
      @BruneiMathClub 3 months ago

      It is P(X) = P(X1, X2, ..., Xn) (joint probability density), not P(Xi)*P(Xj). Note Xi and Xj may not be independent.

    • @junma3575
      @junma3575 3 months ago

      @@BruneiMathClub Thank you so much. I finally get it.

  • @linfengdu7636
    @linfengdu7636 3 months ago

    Why is there a 1/2 multiplying the covariance constraint? Shouldn't the degrees of freedom of the covariance matrix be D(D+1)/2?

    • @BruneiMathClub
      @BruneiMathClub 3 months ago

      That 1/2 in the covariance constraint is not essential. It's there mostly for an aesthetic reason (it looks nicer after differentiation). You get the same result without the 1/2 factor (try it!), as it can be absorbed in the Lagrange multipliers (γ's).

    • @linfengdu7636
      @linfengdu7636 3 months ago

      @@BruneiMathClub Yes indeed. Thank you for your reply and fantastic videos! I've been working on the exercises of the Pattern Recognition and Machine Learning book and your videos helped a lot!

    • @linfengdu7636
      @linfengdu7636 3 months ago

      @@BruneiMathClub BTW you can also evaluate the stationary point in full matrix form using the trace operator for the quadratic term, which I find is pretty neat.
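
For readers following this thread, a plausible form of the objective being discussed (a reconstruction under the thread's description, not a transcript of the video):

```latex
% A plausible form of the objective discussed in this thread (a reconstruction,
% not a transcript of the video): maximize entropy subject to normalization,
% mean, and covariance constraints, with the optional 1/2 on the gamma terms:
\[
\mathcal{L} = -\int p \ln p \, d\mathbf{x}
  + \lambda_0\Bigl(\int p \, d\mathbf{x} - 1\Bigr)
  + \boldsymbol{\lambda}^{\top}\bigl(\mathbb{E}[\mathbf{x}] - \boldsymbol{\mu}\bigr)
  + \sum_{i,j}\frac{\gamma_{ij}}{2}\Bigl(\mathbb{E}\bigl[(x_i-\mu_i)(x_j-\mu_j)\bigr] - \Sigma_{ij}\Bigr).
\]
% Replacing gamma_{ij}/2 by gamma_{ij} just rescales the multipliers, so the
% stationary point is unchanged, as the reply above notes.
```
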

  • @lefteriseleftheriades7381
    @lefteriseleftheriades7381 3 months ago

    And what is it used for?

    • @BruneiMathClub
      @BruneiMathClub 3 months ago

      For example, the regression problem can be cast as finding a projection onto a subspace "generated" by a dataset. Future videos will explain such applications.

  • @raul1827
    @raul1827 3 months ago

    Can you please tell me what the reference for this demonstration is?

    • @BruneiMathClub
      @BruneiMathClub 3 months ago

      It's in quite a few textbooks; for example, in "Elements of Information Theory" by Cover and Thomas. See also the Wikipedia page: en.wikipedia.org/wiki/Jensen%27s_inequality

    • @raul1827
      @raul1827 3 months ago

      @@BruneiMathClub Thanks a lot.

  • @cvtncavidan8351
    @cvtncavidan8351 3 months ago

    You are great, man, thanks, God bless you

    • @BruneiMathClub
      @BruneiMathClub 3 months ago

      You're welcome! God bless you, too.

  • @fynnzentner3964
    @fynnzentner3964 4 months ago

    Great video! I was looking for a video about uniform convergence of the Fourier Series and your video really helped. Thanks.

  • @LETHERL1VE
    @LETHERL1VE 4 months ago

    thank you for the accessible explanation!

  • @ethanbottomley-mason8447
    @ethanbottomley-mason8447 4 months ago

    It's nice to see someone doing proper math in a short. By proper math, I just mean something beyond basic calc/multivariable calc.

    • @BruneiMathClub
      @BruneiMathClub 4 months ago

      Thanks. Shorts can be helpful sometimes.

  • @detranquoc2608
    @detranquoc2608 4 months ago

    nice, thanks a lot for sharing

  • @minabasil
    @minabasil 4 months ago

    very clear ♥ new fan😍

  • @XKALOS7
    @XKALOS7 4 months ago

    I Love it

  • @omargaber3122
    @omargaber3122 4 months ago

  • @hyperduality2838
    @hyperduality2838 5 months ago

    The eigen basis is dual to the standard basis -- conjugacy is dual, spectral decomposition. The integers are self dual as they are their own conjugates. "Always two there are" -- Yoda. Real is dual to imaginary -- complex numbers are dual. Antipodal points identify for the rotation group SO(3) -- stereographic projection.

    • @BruneiMathClub
      @BruneiMathClub 5 months ago

      You really love duality!

    • @hyperduality2838
      @hyperduality2838 5 months ago

      @@BruneiMathClub Yes, duality means that there is a 4th law of thermodynamics. Anything which is dual to entropy is by definition the 4th law of thermodynamics:- Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics! Teleological physics (syntropy) is dual to non teleological physics (entropy). Syntax is dual to semantics -- languages, communication. If mathematics is a language then it is dual. All observers make predictions to track targets, goals and objectives and this is a syntropic process -- teleological. The Einstein reality criterion:- "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." (Einstein, Podolsky, Rosen 1935, p. 777) Internet Encyclopedia of Philosophy:- www.iep.utm.edu/epr/ According to Einstein reality is predicted into existence -- a syntropic process, teleological. Your brain/mind creates models or predictions of reality hence your mind is syntropic (convergent). Here is a video of some well known physicists talking about duality, watch at 1 hour 4 minutes:- ua-cam.com/video/UjDxk9ZnYJQ/v-deo.html Mathematics is full of dualities (see next comment). Once you accept the 4th law this means that that there is a 5th law of thermodynamics:- Symmetry is dual to conservation -- the duality of Noether's theorem. Duality is a symmetry and it is being conserved according to Noether's theorem. Energy is duality, duality is energy -- the 5th law of thermodynamics! Potential energy is dual to kinetic energy -- gravitational energy is dual.

    • @hyperduality2838
      @hyperduality2838 5 months ago

      @@BruneiMathClub Here are some examples of duality in mathematics:- Points are dual to lines -- the principle of duality in geometry. The point duality theorem is dual to the line duality theorem. Homology is dual to co homology -- the word co means mutual and implies duality. Sine is dual to cosine or dual sine -- perpendicularity. Sinh is dual to cosh -- hyperbolic functions. Addition is dual to subtraction (additive inverses) -- abstract algebra. Multiplication is dual to division (multiplicative inverses) -- abstract algebra. Integration (summations, syntropy) is dual to differentiation (differences, entropy). Convergence (syntropy) is dual to divergence (entropy). Injective is dual to surjective synthesizes bijective or isomorphism. The word isomorphism actually means duality. Subgroups are dual to subfields -- the Galois correspondence. Positive is dual to negative -- electric charge or numbers. Positive curvature is dual to negative curvature -- Gauss, Riemann geometry. Curvature or gravitation is dual. "Perpendicularity in hyperbolic geometry is measured in terms of duality" -- Professor Norman J. Wildberger, universal hyperbolic geometry:- ua-cam.com/video/EvP8VtyhzXs/v-deo.html All observers have a syntropic or hyperbolic perspective of reality. The tetrahedron is self dual. The cube is dual to the octahedron. The icosahedron is dual to the dodecahedron. Waves are dual to particles -- quantum duality or pure energy is dual. Symmetric wave functions (Bosons, waves) are dual to anti-symmetric wave functions (Fermions, particles) -- the spin statistics theorem. Bosons are dual to Fermions -- atomic duality. Pure energy is dual and it is being conserved -- the 5th law of thermodynamics!

    • @hyperduality2838
      @hyperduality2838 5 months ago

      @@BruneiMathClub Concepts are dual to percepts -- the mind duality of Immanuel Kant. Mathematicians create new ideas or concepts all the time from their perceptions, measurements, observations or intuitions -- they are using duality! The bad news is that Immanuel Kant has been completely ignored for over 200 years and this is why you need new laws of physics! Antinomy (duality) is two truths that contradict each other -- Immanuel Kant. Enantiodromia is the unconscious opposite or opposame (duality) -- Carl Jung.

  • @hyperduality2838
    @hyperduality2838 5 months ago

    Projections imply two dual perspectives. Increasing or creating new dimensions or states is an entropic process -- Gram-Schmidt procedure. Decreasing or destroying dimensions or states is a syntropic process. Divergence (entropy) is dual to convergence (syntropy) -- increasing is dual to decreasing. "Always two there are" -- Yoda.

  • @hyperduality2838
    @hyperduality2838 5 months ago

    Perpendicularity, orthogonality = Duality! "Perpendicularity in hyperbolic geometry is measured in terms of duality" -- universal hyperbolic geometry. Orthonormality is dual. "Always two there are" -- Yoda. Vectors are dual to co-vectors (forms) -- vectors are dual. Space is dual to time -- Einstein.

  • @hyperduality2838
    @hyperduality2838 5 months ago

    Sine is dual to cosine or dual sine -- the word co means mutual and implies duality. "Always two there are" -- Yoda.

  • @BruneiMathClub
    @BruneiMathClub 5 months ago

    For this and the coming videos, I use typeset notes instead of handwritten notes for presentation. I'd appreciate it if you let me know which one you prefer.

  • @NaN_000
    @NaN_000 5 months ago

    Thank you

  • @NaN_000
    @NaN_000 5 months ago

    Is it right that f'(x) is the signum function?

    • @BruneiMathClub
      @BruneiMathClub 5 months ago

      Almost, but not exactly. They differ at x = 0: signum(0) = 0, whereas the derivative of f(x) = |x| is not defined at x = 0 (the one-sided derivatives there are -1 and +1).

  • @omargaber3122
    @omargaber3122 5 months ago

    Great thanks

  • @juthisarker3789
    @juthisarker3789 5 months ago

    Hello sir... I'm from Bangladesh.

  • @johannesaaen248
    @johannesaaen248 5 months ago

    I swear I was so goddamn confused about the definition of the ball (N_epsilon) set, and how it is used to determine whether a set is open or closed, before your video. For some reason our material lacks any visualisation, so your video really, really helped me out :)

    • @BruneiMathClub
      @BruneiMathClub 5 months ago

      Wow, I'm thrilled to hear that! Thanks, and enjoy your study.

  • @KillerNoam
    @KillerNoam 5 months ago

    I prefer orthogonality

  • @theblinkingbrownie4654
    @theblinkingbrownie4654 5 months ago

    HUGE!

  • @user-hr8uj4qw4k
    @user-hr8uj4qw4k 6 months ago

    I have one issue: by the same argument, the density function of the joint r.v. seems to split into a product of the marginal pdfs of the components, implying that the components are automatically independent, which is obviously wrong. The same question is posted on MSE under the title "Conceptual misunderstanding of the multivariate normal distribution" (for some reason the comment can't include the original link without being taken down). May I ask what might be the cause of this confusion?

    • @BruneiMathClub
      @BruneiMathClub 6 months ago

      Do you mean that although x_i and x_j are not independent, after some transformation, why z_i and z_j seem to be independent? That's because, after the orthogonal transformation, z_i and z_j ARE independent. This is one of the peculiar properties of the multivariate normal distribution: We can always transform it into a joint distribution of independent variables by some orthogonal transformation. By the way, regarding the question on MSE (Math StackExchange), I think that's a different issue, and it has a mistake (the Borel set S should also be transformed along with the variables).

    • @user-hr8uj4qw4k
      @user-hr8uj4qw4k 6 months ago

      @@BruneiMathClub Thank you for taking time to respond, I do see where the derivation went wrong now. But may I still ask how to derive the marginal distribution of each component X_i without resorting to the moment generating function? (As most sources do to get the distribution of the linear transformed AX+b) The initial attempt was to simplify the joint distribution by a sequence of change of variables and hopefully to be able to evaluate P(X_i in S_i) = P(X in R x ... x S_i x ... x R), but since this works only when the components are independent, it doesn't seem to apply in the general case.

    • @BruneiMathClub
      @BruneiMathClub 6 months ago

      @@user-hr8uj4qw4k I don't have an answer now, but I recommend you try a brute-force calculation for the bivariate case to get a feel for it.

    • @user-hr8uj4qw4k
      @user-hr8uj4qw4k 6 months ago

      @@BruneiMathClub Thank you for your advice.
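
A numerical illustration of the decorrelating orthogonal transformation mentioned in the reply above (a sketch; all names are illustrative):

```python
# Sketch (illustrative) of the property mentioned in the reply above: an
# orthogonal change of variables built from the eigenvectors of Sigma turns
# a correlated Gaussian vector into one with independent components.
import numpy as np

rng = np.random.default_rng(5)
Sigma = np.array([[2.0, 1.2], [1.2, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=200_000)

w, Q = np.linalg.eigh(Sigma)  # Sigma = Q diag(w) Q^T, with Q orthogonal
Z = X @ Q                     # z = Q^T x, applied row-wise

print(np.round(np.cov(Z, rowvar=False), 3))  # approx diag(w): uncorrelated,
                                             # hence independent (jointly Gaussian)
```
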

  • @jongxina3595
    @jongxina3595 6 months ago

    Very neat derivation!

  • @theblinkingbrownie4654
    @theblinkingbrownie4654 6 months ago

    Ty fam!

  • @user-hr8uj4qw4k
    @user-hr8uj4qw4k 6 months ago

    It would be nice to include the derivation of some of the basic properties of the multivariate normal distribution. For example, that the expectation of each component r.v. is the corresponding component of the mean vector, that the entries of the positive definite real symmetric matrix Sigma actually give the covariances, and the equivalence between uncorrelatedness and independence, etc.

    • @BruneiMathClub
      @BruneiMathClub 6 months ago

      That's exactly what I'm preparing now. Thanks for the suggestion!

  • @theblinkingbrownie4654
    @theblinkingbrownie4654 6 months ago

    How would I derive the multivariate normal distribution anyway?

    • @BruneiMathClub
      @BruneiMathClub 6 months ago

      That's an interesting question! There are many ways to answer this, but it's basically the same as the univariate case. I will make a video about this in the near future. Stay tuned.

    • @theblinkingbrownie4654
      @theblinkingbrownie4654 6 months ago

      @@BruneiMathClub Sure! I figured they would be similar, I just wanted to know where covariance comes into play

  • @nick45be
    @nick45be 6 months ago

    In mean-square (L²) convergence, why does the integral converge to zero even if there are some points x at which the difference f_n - f does not converge to zero? Maybe because those particular points don't give any contribution? But why don't they give any contribution?

    • @BruneiMathClub
      @BruneiMathClub 6 months ago

      That's a deep question! I suppose you are referring to the example right after the definition of L2 convergence. Countably many points do not contribute to the integral. In this case, the set of points {x_n = (1/2)^{1/n}} is countable (it has the same cardinality as the set of natural numbers). Therefore, they contribute nothing to the integral. You will learn this when you study the theory of Lebesgue integrals (measure theory). Roughly speaking, the "length" of a single point is zero, so its area (contribution to the integral) = height × width = (f_n - f) × 0 = 0. If you add up the lengths of countably many points, the total is still zero. For this reason, we say the set {x_n = (1/2)^{1/n}} is a *null set*. That's why they contribute nothing to the integral.
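
The measure-theoretic fact used in this reply, written out (a transcription):

```latex
% The fact used in the reply, transcribed: a countable set E = {x_1, x_2, ...}
% has Lebesgue measure zero, so it contributes nothing to the integral:
\[
\lambda(E) = \sum_{n=1}^{\infty} \lambda(\{x_n\}) = 0,
\qquad\text{hence}\qquad
\int_{E} \lvert f_n - f \rvert^2 \, dx = 0,
\]
% and pointwise failure of convergence on E does not prevent
% \int |f_n - f|^2 dx from tending to zero.
```
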

  • @cmdcs1
    @cmdcs1 6 months ago

    Looking forward to following this series 😊

  • @dhruvbisht7844
    @dhruvbisht7844 7 months ago

    The teaching style is a bit casual.

  • @javierweeb4428
    @javierweeb4428 7 months ago

    Very nice demonstration of the problem 🙏🙏🙏

  • @YechenHan
    @YechenHan 7 months ago

    Teacher, our Zihan says she won't study stochastic processes anymore.

  • @thorblessing4015
    @thorblessing4015 7 months ago

    Thank you for the help! Best video I have found about this problem!

  • @josephdays07
    @josephdays07 8 months ago

    Excellent video. I have demonstrated and solved it in a different way: ua-cam.com/video/BX1vgntL3jI/v-deo.htmlsi=dRkM3NxYzbb43zTB When I saw this video I remembered my equation, which you can solve with the methodology I have developed: (1-cos(x))/x = 2sin(x/2)*sin(x/2)/(2*sin(x/2)) = sin(x/2); applying the limit, it equals 0. You can also use the identity x = 2sin(x/2) for small angles to integrate: x/tan(x) = 4*sin(x/2) - 2Ln[|sec(x/2)+tan(x/2)|] + b.

  • @jakeaustria5445
    @jakeaustria5445 8 months ago

    I wanna use Gaussian kernel pdf estimation, so I need a multivariate version of it for my use case. I am just not familiar with vectors haha, so it's hard.

    • @BruneiMathClub
      @BruneiMathClub 8 months ago

      Yes, the multivariate version looks scary at first. But it becomes easier if you carefully compare and recognize the similarities with and differences from the univariate version. After all, it's the most natural and straightforward generalization.

    • @jakeaustria5445
      @jakeaustria5445 8 months ago

      @@BruneiMathClub Is there a good visualisation for the 4D Gaussian, haha? I know it's very hard or almost impossible to do. Probably just assign the 4th dimension to color and it will work. Just wondering: is there a simpler kernel than the Gaussian that produces similar results? I know bandwidth matters a lot in this case, but the paper I read about this topic did not elaborate on the alternative kernels that we can use. There are square kernels, triangular kernels, etc. The paper also said that the shape of the kernel doesn't matter, since the pdf will still be approximated with enough data. The exception is the bandwidth: different bandwidths result in different limiting functions. How do we compute the bandwidth for the triangular kernel, square kernel, and others?
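
A minimal multivariate Gaussian KDE sketch along the lines discussed in this thread (illustrative code with a fixed bandwidth matrix h²I; the function name and parameters are made up for the example, not from the video):

```python
# Sketch (not from the video): a minimal multivariate Gaussian kernel density
# estimator with a fixed bandwidth matrix H = h^2 I. Names are illustrative.
import numpy as np

def gaussian_kde(query, data, h=0.5):
    """Average of Gaussian bumps N(x_i, h^2 I) evaluated at the query points."""
    n, d = data.shape
    diff = query[:, None, :] - data[None, :, :]            # shape (m, n, d)
    norm = (2 * np.pi * h ** 2) ** (d / 2)
    bumps = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2)) / norm
    return bumps.mean(axis=1)                              # shape (m,)

rng = np.random.default_rng(6)
data = rng.multivariate_normal([0, 0, 0, 0], np.eye(4), size=1000)  # 4-D data
query = np.zeros((1, 4))
print(gaussian_kde(query, data))  # density estimate at the origin
```
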

  • @jakeaustria5445
    @jakeaustria5445 8 months ago

    Yey

  • @gaufill
    @gaufill 8 months ago

    If you've proven l'Hôpital's rule, it's proven. Nothing is wrong with that.

    • @BruneiMathClub
      @BruneiMathClub 8 months ago

      Right. You don't have to worry about it in practice. It just itches.

  • @spinoking9168
    @spinoking9168 8 months ago

    Idk, but I like sex

  • @jabenx1041
    @jabenx1041 8 months ago

    I read this on a forum, but I don't understand it, though I think it is a simpler way to show that Q is a field: "R is a field, therefore all you need to show is that addition and multiplication of two rational numbers does not take you out of the rationals, because then Q inherits all the other field properties from R." If only I could know why all those properties are inherited.

    • @BruneiMathClub
      @BruneiMathClub 8 months ago

      Showing that a subset of a field is closed under addition and multiplication is not enough to show the subset is a (sub)field. For example, Z (the integers) is closed under addition and multiplication, yet it is not a field. If you know some group theory, you can view a field (K, +, ×) as two groups (K, +) and (K\{0}, ×), then apply the *subgroup criterion* to each of them. That is, a nonempty subset S of a group is a (sub)group if and only if for all a, b ∈ S, ab ∈ S and a^{-1} ∈ S.

  • @marrey7
    @marrey7 9 months ago

    Hello! Thanks for the great video. I found it very difficult to prove that p_{n_0+1}(t) = n_0·e^{-λ·n_0·t}... in the birth process, even after watching your last video. Can you please help me with that?

    • @BruneiMathClub
      @BruneiMathClub 9 months ago

      Thanks! I suppose you mean p_{n_0}(t) = e^{-λ·n_0·t}? That is, the left-hand side is p_{n_0}(t), not p_{n_0+1}(t)? For p_{n_0+1}(t), we have p_{n_0+1}(t) = n_0·e^{-λ·n_0·t}(1 - e^{-λ·t}). See around 18:25

    • @marrey7
      @marrey7 5 months ago

      @@BruneiMathClub Sorry, I meant that I can't solve for p_{n_0+1}(t) = n_0·e^{-λ·n_0·t}(1 - e^{-λ·t}). I tried to do it the same way as for the Poisson process, but the product rule does not apply in quite the same way, and I find it impossible to solve.

    • @BruneiMathClub
      @BruneiMathClub 5 months ago

      @@marrey7 I see. You are right: the method I showed for the Poisson process doesn't work for the birth process. In this case, you can use a more standard method for solving inhomogeneous linear ODEs, the method of variation of parameters. For example, see ua-cam.com/video/wJyJaDwt48w/v-deo.html

    • @marrey7
      @marrey7 5 months ago

      @@BruneiMathClub Oh thank you so much
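
A sketch of the integrating-factor computation suggested in this thread (a derivation under the thread's notation, i.e. variation of parameters; not a transcript of the video):

```latex
% Sketch of the computation discussed above (derivation via an integrating
% factor; notation follows the thread). With p_{n_0}(t) = e^{-lambda n_0 t}
% and p_{n_0+1}(0) = 0, the birth-process master equation gives
\[
\frac{dp_{n_0+1}}{dt} = \lambda n_0\, p_{n_0}(t) - \lambda (n_0+1)\, p_{n_0+1}(t).
\]
% Multiplying by the integrating factor e^{lambda (n_0+1) t} and integrating:
\[
\frac{d}{dt}\Bigl(e^{\lambda(n_0+1)t}\, p_{n_0+1}\Bigr) = \lambda n_0\, e^{\lambda t}
\;\Longrightarrow\;
e^{\lambda(n_0+1)t}\, p_{n_0+1}(t) = n_0\bigl(e^{\lambda t} - 1\bigr),
\]
% which rearranges to p_{n_0+1}(t) = n_0 e^{-lambda n_0 t}(1 - e^{-lambda t}),
% the formula quoted in the replies.
```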