Man, this man just wrote an entire book, updated it through eight editions, and then gives the explanations away for free!!!
Totally the inspiration I needed in my life.
At 2:55, “no-cache” means the response can be stored in caches, but the response must be validated with the origin server before each reuse. It does not mean "don't cache at all."
Anyone know if this or an equivalent course is offered on a MOOC platform?
Just to get a certification for completing the material.
8:29 How do we get the 0.01 sec queueing delay? And how can we take the average data rate to be L/R? If the link capacity is 1.54 Mbps but it has to be used by both the institutional router and the public Internet router to load bits onto the link, does that mean the sum of the rates at which both routers load bits onto the link shouldn't exceed 1.54 Mbps? And how do we know the browsers receive responses at the same rate at which they send requests? Like, 15 requests sent in one second is 15*L bits in one second.
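The 0.01 sec isn't derived from a formula: once the cache drops the access-link utilization well below 1, queueing delay becomes small, so the example just assumes "say, 10 msec." And yes, the combined rate at which both routers put bits onto the 1.54 Mbps link can't exceed its capacity; that's exactly what the utilization ratio measures. A quick sketch of the arithmetic, assuming the numbers commonly used in this example (100 Kbit average object, 15 requests/sec, 40% cache hit rate — check them against the slides):

```python
# Access-link utilization in the caching example (assumed numbers).
L = 100e3      # average object size, bits
a = 15         # average request rate, requests/sec
R = 1.54e6     # access-link capacity, bits/sec

rho_no_cache = (L * a) / R            # 1.50 Mbps of traffic on a 1.54 Mbps link
print(f"utilization without cache: {rho_no_cache:.2f}")   # 0.97 -> huge queueing delays

hit_rate = 0.4                        # fraction of requests served by the cache
rho_cache = (1 - hit_rate) * L * a / R
print(f"utilization with cache:    {rho_cache:.2f}")      # 0.58 -> small delay, "say 10 msec"
```

At utilization near 1 the queue grows without bound; at ~0.58 the link drains faster than traffic arrives, which is why a small fixed delay is a reasonable assumption.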
Hello, at the 4:50-ish mark can you explain how you did the math?
1.50/1.54 and 1.50M/1G
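Both ratios are access-link utilizations, assuming the example's numbers (15 requests/sec at 100 Kbits per object, which is 1.50 Mbps of demand). The first divides that demand by the 1.54 Mbps link; the second shows what happens if you instead buy a 1 Gbps link:

```python
# The two ratios at ~4:50, as plain arithmetic (assumed example numbers).
data_rate = 15 * 100e3           # 15 req/sec * 100 Kbit/object = 1.50 Mbps of demand

# Option 1: keep the 1.54 Mbps access link -> utilization near 1, big delays
print(data_rate / 1.54e6)        # 1.50/1.54 ~ 0.97

# Option 2: upgrade to a 1 Gbps access link -> utilization is negligible
print(data_rate / 1e9)           # 1.50M/1G = 0.0015
```

Same numerator both times; only the link capacity in the denominator changes.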
Just amazing
Perfect
Will the web cache server send a HEAD request to the origin server every time I access that site, to check whether it has been modified?
Performance:
Web caches,
Conditional GET,
HTTP/2 HTTP/3
I'm confused by all the splitting that happens at every layer, and why it has to happen.
Currently you can have an IP packet containing part of a TCP segment, which contains part of an HTTP/2 frame, which contains part of the HTTP object.
Why does every layer need to split its data up? You'd think one single layer should be responsible for this.
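Each layer chops for a different reason and knows nothing about the boundaries above it: HTTP/2 frames objects so streams can be interleaved, TCP segments a byte stream to fit the path's MSS, and IP is bounded by the link MTU. Since the limits exist at different places (the peer application, the end hosts, each link), no single layer could enforce them all. A toy illustration with assumed sizes:

```python
# Toy demo of why boundaries don't line up: each layer chops at its own limit.

def split(data: bytes, limit: int) -> list[bytes]:
    return [data[i:i + limit] for i in range(0, len(data), limit)]

http_object = b"x" * 50_000              # one HTTP object
frames = split(http_object, 16_384)      # HTTP/2 DATA frames (default max frame size)

# TCP sees only a byte stream, so frame boundaries vanish before segmentation:
stream = b"".join(frames)
segments = split(stream, 1_460)          # typical TCP MSS over Ethernet

print(len(frames), len(segments))        # 4 frames, 35 segments
```

The receiver unwinds it in reverse: TCP reassembles the byte stream, and HTTP/2 finds its frame boundaries again inside that stream, so the mid-frame splits are invisible one layer up.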
still wanna know?
WAH DADA WAH
🧠
Poggers Keanu
You're so bad at explaining the numericals — btw, thanks for the playlist!
He has done an incredible job explaining the concepts. You should try to find your own knowledge gaps if you don't understand it.
If you can't understand percentages or do basic multiplication and addition on your own, that sounds like a you problem...
Division and percentages are part of high school math.