Deriving the multivariate normal distribution from the maximum entropy principle

  • Published 30 Jan 2024
  • Just like the univariate normal distribution, we can derive the multivariate normal distribution from the maximum entropy principle. But in this case, we need to specify the whole covariance matrix (not just variances).
    - For the univariate version, see • Maximum entropy and th...
    - For the basic properties of multivariate Gaussian integrals, see • Multivariate Gaussian ...
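As a rough sketch of the derivation the video covers (the notation here is my own, not necessarily the video's): maximize the differential entropy subject to normalization, mean, and covariance constraints,

```latex
\[
\max_{p}\; H[p] = -\int p(\mathbf{x})\,\ln p(\mathbf{x})\,d\mathbf{x}
\quad\text{subject to}\quad
\int p(\mathbf{x})\,d\mathbf{x} = 1,\qquad
\int \mathbf{x}\,p(\mathbf{x})\,d\mathbf{x} = \boldsymbol{\mu},
\]
\[
\int (x_i-\mu_i)(x_j-\mu_j)\,p(\mathbf{x})\,d\mathbf{x} = \Sigma_{ij}
\qquad (i,j = 1,\dots,D).
\]
```

Stationarity of the Lagrangian forces $p$ to be the exponential of a quadratic form in $\mathbf{x}$, and matching the constraints pins the parameters down to

```latex
\[
p(\mathbf{x}) = \frac{1}{(2\pi)^{D/2}\,|\Sigma|^{1/2}}
\exp\!\Big(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\Big).
\]
```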

COMMENTS • 9

  • @theblinkingbrownie4654 5 months ago

    HUGE!

  • @linfengdu7636 3 months ago

    Why is there a 1/2 multiplying the covariance constraint? Shouldn't the number of degrees of freedom of the covariance matrix be D(D+1)/2?

    • @BruneiMathClub 3 months ago

      That 1/2 in the covariance constraint is not essential. It's there mostly for aesthetic reasons (it looks nicer after differentiation). You get the same result without the 1/2 factor (try it!), since it can be absorbed into the Lagrange multipliers (the γ's).
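      To make the absorption explicit (my own sketch; signs and symbols are assumed, not taken from the video): with the 1/2 in the constraint, stationarity puts a factor 1/2 in the exponent,

```latex
\[
p(\mathbf{x}) \propto
\exp\!\Big(-\tfrac{1}{2}\sum_{i,j}\gamma_{ij}\,(x_i-\mu_i)(x_j-\mu_j)\Big),
\]
```

      while without it one gets $\exp\!\big(-\sum_{i,j}\gamma'_{ij}(x_i-\mu_i)(x_j-\mu_j)\big)$, which is the same family of densities after the rescaling $\gamma'_{ij} = \gamma_{ij}/2$.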

    • @linfengdu7636 3 months ago +1

      @@BruneiMathClub Yes indeed. Thank you for your reply and the fantastic videos! I've been working through the exercises in the Pattern Recognition and Machine Learning book, and your videos have helped a lot!

    • @linfengdu7636 3 months ago

      @@BruneiMathClub BTW, you can also evaluate the stationary point in full matrix form using the trace operator for the quadratic term, which I find pretty neat.
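      Presumably the commenter means the standard trace identity (my reconstruction, with $\Gamma$ denoting the matrix of multipliers):

```latex
\[
(\mathbf{x}-\boldsymbol{\mu})^{\top}\Gamma\,(\mathbf{x}-\boldsymbol{\mu})
= \operatorname{tr}\!\big(\Gamma\,(\mathbf{x}-\boldsymbol{\mu})(\mathbf{x}-\boldsymbol{\mu})^{\top}\big),
\qquad\text{so}\qquad
\mathbb{E}\big[(\mathbf{x}-\boldsymbol{\mu})^{\top}\Gamma\,(\mathbf{x}-\boldsymbol{\mu})\big]
= \operatorname{tr}(\Gamma\,\Sigma),
\]
```

      which lets one differentiate with respect to the whole matrix $\Gamma$ at once instead of element by element.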

  • @junma3575 3 months ago

    Shouldn't P(X) be P(Xi)*P(Xj) in the covariance term? Could still using P(X) be a mistake?

    • @BruneiMathClub 3 months ago +1

      It is P(X) = P(X1, X2, ..., Xn) (the joint probability density), not P(Xi)*P(Xj). Note that Xi and Xj may not be independent.
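      A quick numerical check of this point (my own example, not from the video): with correlated components, the covariance computed under the joint density is nonzero, whereas the product of marginals would always give zero.

```python
import numpy as np

# Draw samples from a bivariate normal with correlation 0.8.
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=200_000)

# Monte Carlo estimate of E[X1*X2] under the JOINT distribution P(X1, X2).
joint_est = np.mean(x[:, 0] * x[:, 1])

# Under the (incorrect) independence assumption P(X1)*P(X2), the same
# expectation would factorize into E[X1]*E[X2], which is ~0 here.
indep_est = np.mean(x[:, 0]) * np.mean(x[:, 1])

print(joint_est)  # close to the true covariance 0.8
print(indep_est)  # close to 0
```

      The gap between the two estimates is exactly why the constraint must integrate against the joint density.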

    • @junma3575 3 months ago

      @@BruneiMathClub Thank you so much. I finally get it.