Fast Fading and Ergodic Capacity [Video 10]

  • Published 7 Jan 2025

COMMENTS •

  • @ahmadbaraya9530
    @ahmadbaraya9530 1 year ago

    Interesting videos. Clear explanations. Thank you.

  • @hatemodet131
    @hatemodet131 4 years ago

    Thanks a lot for your efforts in publishing such videos; they help a lot in revising the theory that a person loses touch with while working in industry.

  • @abdullahimohammad9513
    @abdullahimohammad9513 3 years ago

    Thank you very much, Prof. You are such an excellent teacher. Your blogs and videos have really helped me in grounding my understanding of complex concepts. I'm at the moment pursuing postgraduate studies at University College London. I hope to collaborate with you one day.

  • @aothatday8152
    @aothatday8152 1 year ago

    Thanks for the great video. I have a question: I want to calculate the ergodic capacity with your formula at 1:36. In this formula, the SNR is a random variable because g is random (for example, Gaussian distributed). But in the next figure at 2:00, the SNR is a specific value; I guess this is the expected value of the random SNR, but I'm not sure. So my question is: how do I calculate the ergodic capacity from a specific value of the SNR?

    • @WirelessFuture
      @WirelessFuture  1 year ago +1

      Yes, we define the SNR as the average of q|g|^2/N_0: SNR = E{q|g|^2/N_0}. The ergodic capacity is E{log2(1+x)}, where x = q|g|^2/N_0 is exponentially distributed with mean equal to the average SNR. Based on that distribution, one can evaluate the expectation E{log2(1+x)} numerically in two different ways: evaluate the expectation as an integral, or approximate it based on a sample average.
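A minimal sketch of both numerical approaches described in this reply, assuming x is exponentially distributed with mean equal to the average SNR (the value 10, i.e., 10 dB, is an illustrative choice, not from the video):

```python
import numpy as np
from scipy.integrate import quad

def ergodic_capacity_integral(avg_snr):
    """E{log2(1+x)} via numerical integration, with x ~ Exp(mean = avg_snr)."""
    pdf = lambda x: np.exp(-x / avg_snr) / avg_snr  # exponential pdf
    value, _ = quad(lambda x: np.log2(1.0 + x) * pdf(x), 0.0, np.inf)
    return value

def ergodic_capacity_sample(avg_snr, n=1_000_000, seed=0):
    """E{log2(1+x)} approximated by a sample average (Monte Carlo)."""
    x = np.random.default_rng(seed).exponential(scale=avg_snr, size=n)
    return np.mean(np.log2(1.0 + x))
```

For an average SNR of 10, both approaches give roughly 2.9 bit/s/Hz, which is below log2(1 + 10) ≈ 3.46, the upper bound that Jensen's inequality predicts.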

  • @francoisr.927
    @francoisr.927 3 years ago

    Hi Emil, thanks for the video! Small question: how is the lower bound obtained at 3:56? I guess the upper bound immediately follows from Jensen's inequality.

    • @WirelessFuture
      @WirelessFuture  3 years ago

      It also follows from Jensen's inequality, but applied to log2(1+1/x), where x = N0/(q ||g||^2).

    • @francoisr.927
      @francoisr.927 3 years ago

      @@WirelessFuture Thanks, indeed!
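The two Jensen bounds discussed in this thread can be checked by simulation. A small sketch under assumed values not taken from the video: g ~ CN(0, I_M) with M = 4, and power normalized so that q/N0 = 1:

```python
import numpy as np

# Monte Carlo check that the lower Jensen bound (applied to log2(1+1/y),
# y = N0/(q||g||^2)) and the upper Jensen bound (applied to log2(1+x))
# sandwich the ergodic capacity. Setup is illustrative: M = 4, q/N0 = 1.
rng = np.random.default_rng(1)
M, n = 4, 1_000_000
# CN(0,1) entries: real and imaginary parts each carry variance 1/2
g = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
x = np.sum(np.abs(g) ** 2, axis=1)  # x = q*||g||^2/N0 with q/N0 = 1

ergodic = np.mean(np.log2(1.0 + x))            # E{log2(1+x)}
upper = np.log2(1.0 + np.mean(x))              # Jensen on concave log2(1+x)
lower = np.log2(1.0 + 1.0 / np.mean(1.0 / x))  # Jensen on convex log2(1+1/y)
print(lower, ergodic, upper)                   # lower <= ergodic <= upper
```

With these assumed numbers, the upper bound is log2(1 + M) ≈ 2.32 and the lower bound is log2(1 + (M-1)) = 2, with the ergodic capacity in between.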

  • @steavenchede9373
    @steavenchede9373 2 years ago

    Thanks for the video! Could you please recommend one or two books where this video series' concepts are well-summarized?

    • @WirelessFuture
      @WirelessFuture  2 years ago +1

      The videos are actually produced along with an unpublished compendium that has been used for teaching. Send me an email at emilbjo@kth.se and you can get the PDF.

  • @yasserothman4023
    @yasserothman4023 3 years ago

    Thanks for the explanation, but:
    1- At @0:31, why did you mention that the capacity is achieved by sending with large L, i.e., L -> infinity?
    2- I mean, what is the intuition or reasoning behind this statement?
    3- What if L were finite?

    • @WirelessFuture
      @WirelessFuture  3 years ago

      Because the capacity is defined as the highest bit rate at which one can transmit with zero error probability as L -> infinity. Please see the noisy-channel coding theorem: en.wikipedia.org/wiki/Noisy-channel_coding_theorem

    • @yasserothman4023
      @yasserothman4023 3 years ago

      @@WirelessFuture Hi,
      @3:40 how was the lower bound derived? I mean, where did the M-1 come from? As far as I know (eq. C.3 in Appendix C of the Fundamentals of Massive MIMO book), we need to set the RV z = 1/u inside the log term, where u = 1/||g||^2 is an inverse chi-squared random variable:
      E{log2(1+z)} >= log2(1 + 1/E{1/z}) = log2(1 + 1/E{1/||g||^2}),
      and since the mean of an inverse chi-squared random variable is 1/(degrees of freedom - 2), this should be M-2 instead of M-1.
      Is that right?
      Thanks

    • @WirelessFuture
      @WirelessFuture  3 years ago

      @@yasserothman4023 ||g||^2 has a (scaled) chi-squared distribution with 2M degrees of freedom. When you subtract 2 from that, you get 2(M-1). It all comes down to the fact that the real and imaginary parts each contain only half the variance of the complex random variables. If you want to learn more about these things, I recommend the book Random Matrix Theory and Wireless Communications.
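The M-1 factor in this reply can be verified by simulation. A quick sketch, assuming g has M i.i.d. CN(0,1) entries so that 2||g||^2 is chi-squared with 2M degrees of freedom (M = 4 is just an illustrative value):

```python
import numpy as np

def mean_inverse_squared_norm(M, n=1_000_000, seed=2):
    """Estimate E{1/||g||^2} for g ~ CN(0, I_M); theory predicts 1/(M-1)."""
    rng = np.random.default_rng(seed)
    # Unit-variance complex entries: real/imag parts each have variance 1/2
    g = (rng.standard_normal((n, M)) + 1j * rng.standard_normal((n, M))) / np.sqrt(2)
    return np.mean(1.0 / np.sum(np.abs(g) ** 2, axis=1))
```

For M = 4 this returns approximately 1/3 = 1/(M-1), not 1/(M-2): with 2M real-valued degrees of freedom of variance 1/2 each, the "minus 2" of the real-valued inverse chi-squared rule becomes "minus 1" when expressed in terms of M.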

  • @kozhenidres314
    @kozhenidres314 4 years ago

    Thank you, another great video, and here is another question for you: in 4G/LTE and 5G, what is the modulation rate (or chip rate), and why can't we increase it to any level we want in order to increase the data rate, which is proportional to it?

    • @WirelessFuture
      @WirelessFuture  4 years ago +1

      The modulation symbol rate is proportional to the bandwidth. It is the sampling theorem that determines how many symbols can be squeezed into a signal. If you sample more often, you won't get more information. This was mentioned in Video 4: ua-cam.com/video/kP_FhaclHPg/v-deo.html

    • @kozhenidres314
      @kozhenidres314 4 years ago +2

      @@WirelessFuture I covered almost all of wired (optical) networking in less than 6 months, but it looks like wireless is 150X more complicated. Only you and some other UA-cam videos could make it easier. Thank you.

  • @umerashraf992
    @umerashraf992 4 years ago

    What is the practical number of antennas?

    • @WirelessFuture
      @WirelessFuture  4 years ago +1

      In sub-6 GHz spectrum, a base station might have up to 64 antennas and a handset 2 antennas. More antennas can be squeezed into the same area at higher frequencies.