One "Fun" annoyance: You can't use QUIC without a TCP fallback, because a great number of company firewalls will block any port that isn't expressly allowed (as is sensible), and they generally don't allow UDP. Chrome will always open both HTTP3 and HTTP2 connections when trying to reach a host, so if the 3 fails it can fall back to 2 in no time at all.
If the browser suppots HTTP3: The browser first connects to the website via HTTP/2 and if the server advertises it's HTTP/3 port then the browser attempts to switch to HTTP/3; if it works, cool. The browser will use HTTP/3 for future requests too. Failure means there is no UDP response within a time range - that is a few seconds (based on the browser settings). So in this case the initial connection will be really slow. And it's not the server's fault, just a firewall between the server and the client drops UDP packets. (That is why by default the browser first uses HTTP/2.) After the timeout, the browser goes back to HTTP/2 and it will keep using HTTP/2 for future requests until you close the process. (In chrome's network debugger tool the protocol just shows HTTP/2. There is no sign, that it tried HTTP/3, but in the timing window there is a huge connection establish time.)
In the fact, many organization allowing http port 80 / 443 by default, because it is very popular. If QUIC will popular as traditional HTTP, organizations give grants on this firewall. Another problem is proxy server. Additionaly, organizations do some advanced body filtering, using Man in The Middle for crypting, using his own internal certificates. At now there is no way to do it with QUIC. But in the future? Who knows? I know no one proxy server that supports QUIC. But it is no problem. QUIC can be configured on the server in paralel with traditional HTTP.
@@vir2plus IE is a song of the past. But is there any problem with currently not available proxy what can convert QUIC to TCP+UDP ? But what for? Currently IE6 is not available by default. You must do some simple configuration in Edge, and switch page to IE mode. This is used for example for accessing to some old, disupgradeable hardware.
Q: How many software engineers does it take, to send a message from one computer to another? A: Apparently many millions of us, over a period of about 30 years.
HTTP/4: The AI powered server pushes ads and recommended paid content directly to your devices without the slowdown of waiting for you to request them. Efficiently uploads your private information with 0 round trips to ask for permission.
You should EQ filter out some of the low end bass frequencies of your voice (aka high-pass). The voice audio is "boomey" when listening to the video on capable speakers. Great video, love the information and presentation!
The host header was a big deal in http 1.1. Before that you could not host multiple websites on the same ip, so if you were a web host you would only do separate paths for separate sites, or have to give each site their own server. Remember geocities?
I've never understood why HTTP was designed to close the connection after requesting a single object. I do understand compute resources were much more constrained than they are today, so keeping tens of thousands of TCP connections open might be problematic, but even then I figured it was better to open a connection, request all the objects for a page, then at that point maybe terminate the connection.
Because in early versions of HTML, you were unlikely to be required to retrieve that many extra resources in the first place. IMAGES didn't even exist as part of HTML for a few years, and while the image tag got included as part of early HTML drafts around 1993, it wasn't actually part of any official HTML standard until the standard itself was established as HTML 2.0 in 1995. Consider the name: Hyper _TEXT_ Markup Language. It was primarily intended to be a text-based presentation of information, which could all be delivered as one HTML document, needing one HTTP transfer. Hyper _media,_ with so many external resources needed(audio, video, images, scripts, etc), didn't really come around until later, and didn't become a gigantic focus until much later. EDIT: Also, HTTP 1.1, which first introduced the ability to use a single connection to handle multiple requests, came out only a year after HTTP 1.0 did, in 1997. Once the need for requesting multiple resources became apparent, it was added to the standard fairly quickly.
@@Acorn_Anomaly Thank you, that does shed some light on things. I got on the internet in the early 90's and most pages had images by that time, so I might have been a little late to the party. =)
@@K9Megahertz @Acorn_anomoly has explained it very well. I just want to add one minor thing. The focus on text is evidenced not just from Hyper Text Markup Language, but also the name of the protocol itself: Hyper Text Transfer Protocol. It was all supposed to be about just text.
@@K9MegahertzAdding on to the lack of multimedia thing, I think it's important to note how HTML didn't natively support any form of multimedia content besides images until HTML 5, which had its first draft release in 2008, and wasn't finalized as a standard until _2014._ That's right, it was only _ten years ago_ that HTML gained native support for audio and video. Anything before then relied on external plugins and special browser support(like Adobe Flash or Java applets, or ActiveX objects). It was _not_ a part of HTML. (HTML _did_ have a tag for specifying external content, "applet", and later "object", but how they were used was basically up to browsers and plugins. The tag just pointed to the content, it was up to the browser and plugins what to do with it.)
When talking about QUIC integrating with TLS 1.3 the narrator actually says "TCP 1.3" which is not correct 😅 This is a great summary though, I love the visuals
@@MrHirenP It says in the text: Adobe Illustrator and After Effects. I would also like to learn this, but there's NO way I would use an Adobe product. I have to search now and see what the open-source alternative would be.
@@andycivil Blender should be able to do all that, supports scripting, and has a very good video editor and compositor built in. The video editor is often overlooked.
I still use HTTP/1.1 for smaller IOT devices because it's easier to implement with limited RAM and CPU power. It supports SSL encryption and chunk encoding. For Webservers I mostly use HTTP/2 or HTTP/3 with fallback to HTTP/2
HTTP 1.1 is text based protocol, each header is separated by CRLF "carriage return and line feed" symbols like in Windows text files, in programming languages written like " " in string literal. After the last header you need to add additional CRLF to define the start of body of http request or response, the length of data in a body is expected to be set by "Content-Length" header. Example: POST /api/endpoint HTTP/1.1 *CRLF* Content-Type: application/json *CRLF* Content-Length: 34 *CRLF* *CRLF* { "someJsonKey": "someJsonValue" }
1:20 actually we didn't use TLS back in the times of HTTP1.0 because it used too much of the precious CPU cycles. Unless you had to do something really security critical thing you just opted for the cheap solution. Also there wasn't much to be stolen on the internet back then :)
Very good explanation. It seems wrong that we still call it HTTP. All these new features and protocol changes are about everything other than the transferring of hypertext.
In the future you'll just get a streaming video feed with no client side logic. You won't need to worry about security because it all runs through the NSA trunk line into Google. The next step is to download an AI into your NPU and you won't even need to receive content because the AI will generate it on the fly, along with recommendations for a cold, delicious Coca-Cola product.
This protocol was dumb from the very first second of its idea. Text request, then BINARY reply with picture... WTF?!?! W3C old m0r0ns cannot make anything properly!
It's also hard to implement from server side. Basically they found better way to just insert things in data-URI and it much more easier and faster than doing push.
Sadly QUIC doesn't bring real world performance increases when it comes to throughput, since its more network and CPU intensive than TCP+kTLS. (larger packets which are more complex to parse, more context switching between kernel- and user-space, larger overhead in parsing) In a pure throughput benchmark, TCP+kTLS will always win, QUIC is roughly half as fast in that area. QUIC is also slower than using multi path tcp + kTLS. but both QUIC and multi path TCP are blocked at most middle boxes you see so you have to use HTTP2 for the next 10-12 years at least regardless.
ugh I thought I was going to get into a sick video with code examples on how http sits over tcp or whatever... I'm too old for this abstract shit. I need to put eyes on browser and compiler (since I'm at it) code. It's incredible how the first protocols where developed all by Standford and DARPA or whatever, now it's some guy at google. Sick videos for professional level expert knowledge don't do well on youtube, and I understand it. I'm not in the mood for them half of the times myself.
@@PeterVerhas Its not that its secure because of encryption or something. Its because GET requests are cached, they also stay in your browser history. So if you submit a GET login form with your user and password, that will get cached and saved in your history, the length is very limited. POST however, its not cached, the data is not visible because is in the body not in the header, no length limit, supports different data types such as booleans. Also, the security is more of a safeguard more for the devs/websites rather than the user. Because we made the difference that GET just gets a resource, while POST alters or does something on the server.
As someone who has had to configure a web content filter, I despise websockets. They are an ugly hack. An accident of history that exists solely as a workaround for... everything. 1. Design TCP. 2. Design HTTP. 3. HTTP is amazing! 4. Firewall admins block everything other than HTTP because they don't know what all the rest is, and it might be a security concern. 5. But now how do we do real-time chat and games and and low-latency interactions? 6. Hear me out... What about... TCP over HTTP over TCP!
Websockets have little to nothing to do with HTTP. They are essentially just a mechanism to turn a HTTP connection into a plain TCP again. This is another great case of corporate firewalls breaking shit for everyone. WS are in a way much more like FTP.
@@joergsonnenberger6836 Indeed they are, and that is their awkwardness: The only reason they exist is as workaround, to deal with the problem of so many firewalls blocking everything that isn't http(s) for security and content filtering reasons. It's just part a back-and-forth between firewall admins trying to keep out unrecognised traffic and application developers trying to make their unrecognised traffic get through.
Honestly if the Handshake is more then just SYN SYN/ACK but instead like "Hello" "Ahh Hello, cert, Initial, fin, and two books to read" this looks like a Potential for UDP amplification attacks. I would rather stick with TCP thanks. And to be honest most TCP connections take no longer then 10-20ms. If you need 100ms just for your Handshakes there is something wrong with your server.
the move to UDP was motivated by the pain of trying to edit TCP in any way. You can just send UDP packets and not do any assembly of them in the kernel; just have the applications do it. And in order to prevent firewalls from messing with it, it's inside of TLS.
It's a little bit sillier even than that. The move to UDP, specifically, was because there can never be another protocol at transport layer now. We have TCP, UDP and ICMP - and never may there be another, because we have address translation and firewalls all over the place now. Not that people haven't tried - multipath TCP, SCTP, UDP-lite. They all hit the same problem: Unless the firewalls and address translation all along the route are configured to handle them, they don't work, and that just isn't realistic in most cases. So we are eternally stuck with only TCP and UDP.
@@vylbird8014 the one who designs UDP is a forward thinker. Even with TCP and UDP are cemented, people can just use UDP when designing new application layer protocol.
@@vylbird8014 there is no reason that we need TCP to be in the kernel, given that UDP exists. make everything UDP sent to userland, and userland has TCP as a library. network accelerators basically hand off an entire ethernet card to a userspace program now; to keep it from bothering the kernel. it's like a 10x speedup.
There is no "HTTP/1". Chunked encoding allows using persistent connections with dynamically computed requests where the size is not known in advance. You could always send content immediately with HTTP/1.0, it was never required to compute Content-Length. HTTP/2 still has head-of-line blocking problems, just not with dynamic content. The funny part of HTTP/3 is that it has been proven to be often slower than HTTP/2.
@@zazethe6553 HTTP/3 is using the good, old UDP (which belongs to the TCP/IP protocol suite) but in disguise, under the name of a QUIC protocol. Which is yet another prothesis atop an old protocol. Aside from that, QUIC being a sublayer of HTTP/3, is an application protocol. By that, it violates the layer model, by providing transport layer protocol functionality within the application layer protocol.
1. No. Onion only transports TCP, not UDP. 2. It is now. Originally it was controlled by google, but they took it to the IETF and had them formalise it as a standard.
This explanation is very good, unfortunately it has very disinformative. HTTP 1 we know, it has serial protocol steps, and close this one after transfer file. SSL/TLS renegotiation takes a lot of time. But it is not a wall, making multiple connections from the client to the server. Adventage with HTTP 1.1 (not .1.1) is only keeping TCP connection with SSL/TLS layer for second req. Of course, client can open multiple connections if it is needed with the same way than HTTP 1. Additional convenience is chunked transfer, specialy designed for transfering very big file, and for transfering streamed data, for example video on youtube. Another protocol is HTTP/2 . This is NOT HTTP 2, but this is HTTP/2. Adventage is framing, and transport multiple requests and ansvers between client on single TCP chanel secured by SSL/TLS. With comparation to HTTP 1.1, any blocking request will not block the rest of this one. In HTTP/2 this is transfered in parallel on single TCP connection. In oposition to HTTP 1.1 , all requests are serial processed, and as some workaround, client will open additional TCP connections. Finally HTTP3, this is completly different to all before. Protocol QUIC is not strict replacement for TCP+SSL. QUIC does not use TCP, this uses UDP. In that situation on starting speed does not decide TCP stack, but decides QUIC. Internally features of QUIC is parallel processed framed transfer data, similar to HTTP/2, wipe out slowly accelerated TCP, and his own cryptography. All in one. But HTTP3 is very complicated, and not available on all platforms. I'm not sure, how long time QUIC can keep idle connection between client and server, and it is able reconnect with the same connection ID (similar to SSH), keeping identyfication without workaround like cookies.
HTTP/1.1 Head-of-line: is it correct that if a request is delaying, all next request are waiting? As I read, if a request is delaying, just a new request has to wait.
1997 sounds dreamy. Having no HTTP Status Codes would eliminate countless hours of pointless navel-gazing debate. It's funny (in a tragic way) how StackOverflow questions about HTTP Status Codes often have multiple directly conflicting incompatible answers each with hundreds of upvotes. Whatever your interpretation of HTTP Status Codes is, it's wrong.
HTTP status codes are important in many scenarios. For example, cases where you browser will prompt you for credentials. Or for cache validation responses or redirects. If you send the wrong response code, the browser will not handle it properly. Yeah there are cases where it isn't clear or meaningful on which should be used when but that doesn't mean they aren't valuable.
@@username7763 1) You're talking about errors closer to the "protocol" level, and HTTP Status Codes at that level make sense especially when you consider that the "P" in HTTP stands for "Protocol". 2) The problem with HTTP Status Codes is when it comes to "application" level errors. The 400 Bad Request code gets argued over ad nauseam. It's so tiresome watching other developers fight over this topic.
@@DemPilafian HTTP status code is necessary. How do you know the request is OK before parsing the payload without it? Many HTTP clients check the status code even before touching any of its contents. This is especially true for many REST clients. These clients only proceed if the status code met their expectations. People can debate all they want, it doesn't reduce the importance of HTTP status codes.
QUICK ist not a protocol or something brand new or invention. It just reliy on good networks. Old good UDP did it decades ago. If network is bad QUICK sucks.
Why does the http (an application layer) need to care about the physical connection (switching from cellular to WiFi for instance) - should that not be handled lower in the network stack?
the QUIC connection ID is not network level, it's an high level abstraction that could be thought of as some sort of "session ID". When the network stack switches IP addresses, for exemple by losing Wifi signal and switching to 5G, the lower parts of the network stack establish a completely new connection but the application layer uses the connection ID to continue the data transmission as nothing had changed (almost). So, yes, it's an application layer abstraction that allows for better decoupling with the underying network stack.
You need to reestablish connection when the underlying physical connection changes. Because, very likely, in this case, your IP also changes. If you're in the middle of loading during this or in constant connection, it'll likely be disconnected because now the server identifies you as a different client. Having an application layer connection ID means the server can still identify the same client connection even when the underlying address changes.
Usually yes, but the experience can be disruptive. Files stop downloading and have to be resumed, chat sessions drop, calls freeze, and the user is left annoyed while everything re-establishes from scratch - which can take a frustratingly long time, tens of seconds. Handling it at transport layer means the application layer can carry on mostly unaffected. Not entirely, but having your video call stutter and glitch for a few seconds is a lot better than disconnecting and having to call back. And yes, video calls often run over HTTPS. Yes, this is dumb. But it has to be that way, because if the customer is on an office or school network the firewall is almost certain to block everything by default - https is the only protocol you can be sure will actually get through all the time. This is why we invented Fucking Websockets, the bane of the web-filter admin's life.
@@vylbird8014 isn't most web video calls using WebRTC? Which itself uses SCTP for the underlying transport. Also, most real-time mobile applications nowadays are quite good at handling network changes. I've played real-time games on mobile, and it doesn't have a problem switching from wifi to cellular and vice-versa.
@@maighstir3003 "The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP), while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability. SCTP is standardized by the Internet Engineering Task Force (IETF) in RFC 9260. The SCTP reference implementation was released as part of FreeBSD version 7, and has since been widely ported to other platforms." Wikipedia
A General Design Flaw of the Internet is that there is no secure Packet handling. You either have insecure Packets (UDP) or Secure Streams (TCP). In Many Situations, you want Packets, that are resend over a statefull connection, but tolerate a random order.
It is not a design flaw. Tolerating random orders but needing guaranteed delivery are contradicting. In other words, it is not a precise requirement. How long can the app wait for a package to arrive and process other packages sent later? How can it signal that a package has not arrived if it is not a stream? If you have some specially tuned needs between the TCP-implemented stream and the UDP-implemented package, then you need a particular application-level protocol implemented using UDP. That is exactly HTTP/3.
@@AncapDudeCris, you must revisit this. Http 2 is implemented by the browsers for years and cluent code does not change at all, 100% backward compatible. So if you are worried about that the clients will not work: don’t. Also, the server will revert back to the old version when the client cannot handle higher than 1.1 As on the server side: you just have to install the new version of the server, and your application will just work because the http api is the same on the server side as well. If your app does not sork on the new version of the server (nginx, Apache or some other server) it means they are very old, outdated and are probably missing also secirity patches.
I like the graphics. Please work on the enunciation. "Unlike" needs K, "side" needs D, "assets" needs T, "lets" needs T,... Those issues made the video hard to follow purely by acoustics, not content.
I doubt support for at least 1.1 will ever go away. Honestly, if all you've got is a simple, static web page, HTTP 1.1 is all you need. The features available in 2.0+ are really more intended for sites that require a more active, constant connection.
You are using the new version without realizing it. The browsers support it without telling you. The servers need only version upgrades. The apps do not need to change. Programmers are using new versions without knowing it.
The thing i learned recently is that: We know that our devices connected to WIFI share the same external ip address but have local ip addresses, but i didn't know that we also share the same external ip address with other ISP customers, they do that because of limited number of IPs in ipv4. The technique is called *Carrier-grade NAT*
If only explanations could win Oscars… Thank u for delivering such high quality content!!!
The thumbnail was so complete and beautiful that I pressed the like button automatically before even playing the video.
One "Fun" annoyance: You can't use QUIC without a TCP fallback, because a great number of company firewalls will block any port that isn't expressly allowed (as is sensible), and they generally don't allow UDP. Chrome will always open both HTTP3 and HTTP2 connections when trying to reach a host, so if the 3 fails it can fall back to 2 in no time at all.
Yeah, because QUIC is just UDP, no more, no less.
the new IE6 compatibility
If the browser supports HTTP/3: The browser first connects to the website via HTTP/2, and if the server advertises its HTTP/3 port, then the browser attempts to switch to HTTP/3; if it works, cool. The browser will use HTTP/3 for future requests too.
Failure means there is no UDP response within a time window - a few seconds (based on the browser settings). So in this case the initial connection will be really slow. And it's not the server's fault; a firewall between the server and the client just drops the UDP packets. (That is why the browser uses HTTP/2 first by default.) After the timeout, the browser goes back to HTTP/2 and keeps using HTTP/2 for future requests until you close the process. (In Chrome's network debugger the protocol just shows HTTP/2. There is no sign that it tried HTTP/3, but in the timing window there is a huge connection-establishment time.)
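For reference, that advertisement is the Alt-Svc response header. A minimal sketch of what the HTTP/2 response carries - the values here are just illustrative:

HTTP/2 200 OK
alt-svc: h3=":443"; ma=86400

The h3=":443" part tells the browser it may try HTTP/3 (QUIC over UDP) on port 443, and ma=86400 says to remember that for a day.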
In fact, many organizations allow HTTP ports 80/443 by default, because they are so widely used. If QUIC becomes as popular as traditional HTTP, organizations will permit it in their firewalls too. Another problem is proxy servers. Additionally, organizations do advanced body filtering, using a man-in-the-middle for the encryption with their own internal certificates. Right now there is no way to do that with QUIC. But in the future? Who knows? I know of no proxy server that supports QUIC. But it is no problem: QUIC can be configured on the server in parallel with traditional HTTP.
@@vir2plus IE is a thing of the past. But is there really a problem in there being no proxy yet that can convert QUIC to TCP+UDP? And what would it be for? Currently IE6 is not available by default; you must do some simple configuration in Edge and switch the page to IE mode. This is used, for example, for accessing old hardware that can't be upgraded.
great way of explaining how the web works
This video is so packed, if there were a quiz at the end of it I'd definitely fail at it, more than once :D
Q: How many software engineers does it take, to send a message from one computer to another?
A: Apparently many millions of us, over a period of about 30 years.
One. But it's the most reinvented wheel in human history.
The first message ever sent on ARPANET was "lo". It was supposed to be "login" but the still-experimental router crashed after two characters.
HTTP/4: The AI powered server pushes ads and recommended paid content directly to your devices without the slowdown of waiting for you to request them. Efficiently uploads your private information with 0 round trips to ask for permission.
What a backwards idea, with neuralink you can have ads directly in your brain! No need to look at a screen or even open your eyes.
Me: I understand HTTP fairly well
ByteByteGo: nope
You should EQ filter out some of the low end bass frequencies of your voice (aka high-pass). The voice audio is "boomey" when listening to the video on capable speakers. Great video, love the information and presentation!
The host header was a big deal in http 1.1. Before that you could not host multiple websites on the same ip, so if you were a web host you would only do separate paths for separate sites, or have to give each site their own server. Remember geocities?
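To make it concrete, a minimal sketch (hostnames made up): two sites on one IP, told apart purely by the Host header:

GET /index.html HTTP/1.1
Host: site-a.example

GET /index.html HTTP/1.1
Host: site-b.example

Same destination IP and port; the server routes each request to a different virtual host.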
Yeah and they reintroduced the problem with SSL. It's funny how every new generation repeats the problem of the old one.
I've never understood why HTTP was designed to close the connection after requesting a single object. I do understand compute resources were much more constrained than they are today, so keeping tens of thousands of TCP connections open might be problematic, but even then I figured it was better to open a connection, request all the objects for a page, then at that point maybe terminate the connection.
Because in early versions of HTML, you were unlikely to be required to retrieve that many extra resources in the first place.
IMAGES didn't even exist as part of HTML for a few years, and while the image tag got included as part of early HTML drafts around 1993, it wasn't actually part of any official HTML standard until the standard itself was established as HTML 2.0 in 1995.
Consider the name: Hyper _TEXT_ Markup Language. It was primarily intended to be a text-based presentation of information, which could all be delivered as one HTML document, needing one HTTP transfer.
Hyper _media,_ with so many external resources needed(audio, video, images, scripts, etc), didn't really come around until later, and didn't become a gigantic focus until much later.
EDIT: Also, HTTP 1.1, which first introduced the ability to use a single connection to handle multiple requests, came out only a year after HTTP 1.0 did, in 1997. Once the need for requesting multiple resources became apparent, it was added to the standard fairly quickly.
@@Acorn_Anomaly Thank you, that does shed some light on things. I got on the internet in the early 90's and most pages had images by that time, so I might have been a little late to the party. =)
@@K9Megahertz @Acorn_anomoly has explained it very well. I just want to add one minor thing.
The focus on text is evidenced not just from Hyper Text Markup Language, but also the name of the protocol itself: Hyper Text Transfer Protocol. It was all supposed to be about just text.
🎉
@@K9Megahertz Adding on to the lack of multimedia thing, I think it's important to note how HTML didn't natively support any form of multimedia content besides images until HTML 5, which had its first draft release in 2008, and wasn't finalized as a standard until _2014._
That's right, it was only _ten years ago_ that HTML gained native support for audio and video.
Anything before then relied on external plugins and special browser support(like Adobe Flash or Java applets, or ActiveX objects). It was _not_ a part of HTML.
(HTML _did_ have a tag for specifying external content, "applet", and later "object", but how they were used was basically up to browsers and plugins. The tag just pointed to the content, it was up to the browser and plugins what to do with it.)
When talking about QUIC integrating with TLS 1.3 the narrator actually says "TCP 1.3" which is not correct 😅
This is a great summary though, I love the visuals
These animations are insane
Do you know how it was done? I’d like to learn this
@@MrHirenP It says in the text: Adobe Illustrator and After Effects. I would also like to learn this, but there's NO way I would use an Adobe product. I have to search now and see what the open-source alternative would be.
@@andycivil Blender should be able to do all that, supports scripting, and has a very good video editor and compositor built in. The video editor is often overlooked.
the visual aid goes kinda crazy with it
Simple way of getting complex information. It is perfect content. Subscribed. Well done guys!!!
Best explanation of the protocol I ever found. Thank you!
I still use HTTP/1.1 for smaller IOT devices because it's easier to implement with limited RAM and CPU power. It supports SSL encryption and chunk encoding.
For Webservers I mostly use HTTP/2 or HTTP/3 with fallback to HTTP/2
HTTP/1.1 is a text-based protocol; each header line is separated by CRLF ("carriage return and line feed") symbols, like in Windows text files - written as "\r\n" in a string literal in most programming languages.
After the last header you need to add one additional CRLF to mark the start of the body of the HTTP request or response; the length of the data in the body is expected to be given by the "Content-Length" header. Example:
POST /api/endpoint HTTP/1.1 *CRLF*
Content-Type: application/json *CRLF*
Content-Length: 34 *CRLF*
*CRLF*
{ "someJsonKey": "someJsonValue" }
Thank you for delivering such quality content. Learned and noted a lot.
At the beginning you promised this would be fascinating. You'll be hearing from my lawyer.
1:20 actually we didn't use TLS back in the times of HTTP/1.0 because it used too many of the precious CPU cycles. Unless you had to do something really security-critical, you just opted for the cheap solution. Also, there wasn't much to be stolen on the internet back then :)
And TLS was called SSL :)
And SSL was not free (in price) to use
@@tribela Not on the general internet anyway. There were no free cert providers.
SSL 2.0 actually predates HTTP/1.1 by three years.
Very good explanation. It seems wrong that we still call it HTTP. All these new features and protocol changes are about everything other than the transferring of hypertext.
In the future you'll just get a streaming video feed with no client side logic. You won't need to worry about security because it all runs through the NSA trunk line into Google. The next step is to download an AI into your NPU and you won't even need to receive content because the AI will generate it on the fly, along with recommendations for a cold, delicious Coca-Cola product.
This protocol was dumb from the very first second of its idea. Text request, then BINARY reply with picture... WTF?!?! W3C old m0r0ns cannot make anything properly!
- Thx.
- Well done: clear/concise; and informative. Excellent graphics, too.
- Keep up the great content...
Thank u, Great explanation as always 👏🏻
Thank you for this historical review!
There's an explosion of information, and I need to pause and think in a lot of places.
he says 60% of internet is using http 2, LOL
Because we can't process on-screen text and different spoken text at the same time. So we have to pause to read what's on the screen.
QUIC breaks content inspection at the f/w and local endpoint security solutions. Many orgs just block it outright until security catches up.
FYI: server push from HTTP 2 is no longer supported and many browsers
AND MANY BROWSERS WHATT
@@lucaxtshotting2378 in many browsers. Sorry
@@lucaxtshotting2378 we might never know 😭
In @@lucaxtshotting2378
It's also hard to implement on the server side.
Basically people found that just inlining things as data: URIs is a better approach - it's much easier and faster than doing push.
Quic explanation on http 3 awesome ❤❤❤
thanks for the knowledge. great presentation and an easy-to-absorb lesson ❤
Awesome video .... great visualizations.
Nice illustrations. Truly a picture is worth a thousand words. I think the like button is broken due to how hard I mashed it 😅.
Sadly QUIC doesn't bring real-world performance increases when it comes to throughput, since it's more network- and CPU-intensive than TCP+kTLS (larger packets which are more complex to parse, more context switching between kernel- and user-space, larger parsing overhead).
In a pure throughput benchmark, TCP+kTLS will always win; QUIC is roughly half as fast in that area.
QUIC is also slower than using multipath TCP + kTLS.
But both QUIC and multipath TCP are blocked by most of the middleboxes you see, so you'll have to use HTTP/2 for the next 10-12 years at least, regardless.
AMAZING EXPLANATION THANK YOU SO MUCH
great video, thanks for sharing
There is a mismatch in the thumbnail between the diagrams of HTTP/1.1 and HTTP/2. I think they should be swapped.
Did you click on it, watch the video, and/or engage in the comments? If yes: working as intended.
@@-felt The purpose was the topic, not the mismatch. I also recognized it…
Thanks for the vid, but it would be fair to point out the downsides of HTTP/3 like you did with the previous versions of this protocol.
neat. appreciate your animations :)
Excellent content as always!!
ugh I thought I was going to get into a sick video with code examples on how http sits over tcp or whatever... I'm too old for this abstract shit.
I need to put eyes on browser and compiler (since I'm at it) code.
It's incredible how the first protocols were all developed by Stanford and DARPA or whatever; now it's some guy at Google.
Sick videos for professional level expert knowledge don't do well on youtube, and I understand it. I'm not in the mood for them half of the times myself.
thank you! I enjoyed this video a lot!
Thank you for doing this!
To be clear: POST was common before 1996. How else would you submit forms? It was only standardized in 1996.
You are correct with your statement, but the reasoning is wrong. You can send form data in the request URL using GET.
@@PeterVerhas Yes, you can send form data using GET, but I'm sure the url length was limited so it was discouraged.
Simply by a GET, but we use POST to avoid sending all the data through the URL because it's not secure.
@@2pacgamer only if you can tell me why it is not secure. It travels in the same tcp channel.
@@PeterVerhas It's not that it's secure because of encryption or something. It's because GET requests are cached, and they also stay in your browser history. So if you submit a GET login form with your user and password, that will get cached and saved in your history, and the URL length is very limited. POST, however, is not cached, the data is not visible because it's in the body, not in the URL, there is no length limit, and it supports different data types such as booleans. Also, the security is more of a safeguard for the devs/websites than for the user, because by convention GET just gets a resource, while POST alters or does something on the server.
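To make the difference concrete, here's a sketch of the same made-up login form submitted both ways (raw HTTP/1.1):

GET /login?user=alice&pass=s3cret HTTP/1.1
Host: example.com

POST /login HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 22

user=alice&pass=s3cret

The GET version puts the credentials in the URL, which ends up in history, logs, and caches; the POST version carries them in the body.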
Nice overview. Thanks.
Excellent 👌
Suitable for high quality content and 3d objects manipulating transmission.
Very nice, I really liked it :)
Websockets are an important feature of HTTP2, that got little attention in this video, imho
As someone who has had to configure a web content filter, I despise websockets. They are an ugly hack. An accident of history that exists solely as a workaround for... everything.
1. Design TCP.
2. Design HTTP.
3. HTTP is amazing!
4. Firewall admins block everything other than HTTP because they don't know what all the rest is, and it might be a security concern.
5. But now how do we do real-time chat and games and low-latency interactions?
6. Hear me out... What about... TCP over HTTP over TCP!
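For the record, step 6 literally looks like this on the wire - a sketch of the WebSocket upgrade handshake (the key values are the illustrative ones from the spec):

GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

After the 101 response, the connection stops speaking HTTP and just carries framed two-way traffic - the "TCP over HTTP over TCP" above.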
Websockets have little to nothing to do with HTTP. They are essentially just a mechanism to turn an HTTP connection back into plain TCP. This is another great case of corporate firewalls breaking shit for everyone. WS are in a way much more like FTP.
@@joergsonnenberger6836 Indeed they are, and that is their awkwardness: the only reason they exist is as a workaround, to deal with the problem of so many firewalls blocking everything that isn't http(s) for security and content-filtering reasons. It's just part of a back-and-forth between firewall admins trying to keep out unrecognised traffic and application developers trying to make their unrecognised traffic get through.
Excellent video Thanks!!
Well done.
Well done!
Honestly, if the handshake is more than just SYN, SYN/ACK, but instead like "Hello" / "Ahh hello, cert, initial, fin, and two books to read", this looks like potential for UDP amplification attacks. I would rather stick with TCP, thanks. And to be honest, most TCP connections take no longer than 10-20ms. If you need 100ms just for your handshakes, there is something wrong with your server.
the move to UDP was motivated by the pain of trying to edit TCP in any way. You can just send UDP packets and not do any assembly of them in the kernel; just have the applications do it. And in order to prevent firewalls from messing with it, it's inside of TLS.
It's a little bit sillier even than that. The move to UDP, specifically, was because there can never be another protocol at transport layer now. We have TCP, UDP and ICMP - and never may there be another, because we have address translation and firewalls all over the place now. Not that people haven't tried - multipath TCP, SCTP, UDP-lite. They all hit the same problem: Unless the firewalls and address translation all along the route are configured to handle them, they don't work, and that just isn't realistic in most cases. So we are eternally stuck with only TCP and UDP.
@@vylbird8014 whoever designed UDP was a forward thinker. Even with TCP and UDP cemented in place, people can just use UDP when designing a new application-layer protocol.
@@vylbird8014 there is no reason that we need TCP to be in the kernel, given that UDP exists. make everything UDP sent to userland, and userland has TCP as a library. network accelerators basically hand off an entire ethernet card to a userspace program now; to keep it from bothering the kernel. it's like a 10x speedup.
There is no "HTTP/1". Chunked encoding allows using persistent connections with dynamically computed requests where the size is not known in advance. You could always send content immediately with HTTP/1.0, it was never required to compute Content-Length. HTTP/2 still has head-of-line blocking problems, just not with dynamic content. The funny part of HTTP/3 is that it has been proven to be often slower than HTTP/2.
bro your visuals are flawless, but please buy a new mic and do some sound design with your voice over
Sadly, we tend to forget that somewhere deep, underneath those modern layers, a good old TCP/IP still exists.
No, under HTTP/3 there's no TCP, as explained here.
@@zazethe6553 HTTP/3 is using the good old UDP (which belongs to the TCP/IP protocol suite) in disguise, under the name of the QUIC protocol - yet another prosthesis atop an old protocol. Aside from that, QUIC, being a sublayer of HTTP/3, is an application protocol. By that it violates the layer model, providing transport-layer functionality within an application-layer protocol.
thank you for sharing
Sir, it would be better if you could provide the images of those illustrations for saving.
Great video and animations. But pretty fast - I doubt anyone could follow the explanations without already knowing most of it.
Thanks ❤🎉
Technical SEO ❤
thanks
Very good content
I'm pretty sure at the time of HTTP/1, TLS didn't yet exist.
A useful tip; it helped.
wow thanks!
really nice !
you're the best
Is QUIC compatible with onion protocol?
Is QUIC a public standard or are there legal limitations on how it can be used related to ownership?
1. No. Onion only transports TCP, not UDP.
2. It is now. Originally it was controlled by google, but they took it to the IETF and had them formalise it as a standard.
@@vylbird8014 Thanks for the info.
You rock!
How would one harden the HTTP/3 (or next) protocol for a bank's server, by example?
3:20 isn't that "domain sharding"?
damn these animations have got sth to say!
thx bro!
i'm just curious, which tools do you use to make these animations?
good video 🙏
hi, what tools did you use to make the animation? thx
This explanation is very good; unfortunately it is also somewhat misleading. HTTP 1 we know: it has serial protocol steps and closes the connection after transferring the file. SSL/TLS renegotiation takes a lot of time, but it is not a wall against making multiple connections from the client to the server. The advantage of HTTP 1.1 (properly HTTP/1.1) is keeping the TCP connection, with its SSL/TLS layer, for the second request. Of course, the client can still open multiple connections if needed, the same way as in HTTP 1. An additional convenience is chunked transfer, specially designed for transferring very big files and for transferring streamed data, for example video on YouTube.
Another protocol is HTTP/2. This is NOT "HTTP 2"; it is "HTTP/2". The advantage is framing: transporting multiple requests and answers between client and server on a single TCP channel secured by SSL/TLS. Compared with HTTP 1.1, a blocking request will not block the rest; in HTTP/2 everything is transferred in parallel on a single TCP connection. In HTTP 1.1, by contrast, all requests are processed serially, and as a workaround the client opens additional TCP connections.
Finally HTTP/3, which is completely different from everything before. The QUIC protocol is not a strict replacement for TCP+SSL. QUIC does not use TCP; it uses UDP. So the starting speed is not decided by the TCP stack but by QUIC. QUIC's internal features are parallel processing of framed transfer data, similar to HTTP/2, doing away with slowly accelerating TCP, plus its own cryptography. All in one. But HTTP/3 is very complicated and not available on all platforms.
I'm not sure how long QUIC can keep an idle connection between client and server, and whether it is able to reconnect with the same connection ID (similar to SSH), keeping the identification without workarounds like cookies.
Which middleware/webhost/language stack supports HTTP/3?
HTTP/1.1 head-of-line blocking: is it correct that if a request is delayed, all subsequent requests have to wait? As I read it, if a request is delayed, just the next request has to wait.
How do I ensure that my Firefox loads pages with HTTP/3/QUIC?
Present sir
1997 sounds dreamy. Having no HTTP Status Codes would eliminate countless hours of pointless navel-gazing debate. It's funny (in a tragic way) how StackOverflow questions about HTTP Status Codes often have multiple directly conflicting incompatible answers each with hundreds of upvotes. Whatever your interpretation of HTTP Status Codes is, it's wrong.
HTTP status codes are important in many scenarios. For example, cases where your browser will prompt you for credentials, or cache-validation responses, or redirects. If you send the wrong response code, the browser will not handle it properly. Yeah, there are cases where it isn't clear which should be used when, but that doesn't mean they aren't valuable.
@@username7763 1) You're talking about errors closer to the "protocol" level, and HTTP Status Codes at that level make sense especially when you consider that the "P" in HTTP stands for "Protocol".
2) The problem with HTTP Status Codes is when it comes to "application" level errors. The 400 Bad Request code gets argued over ad nauseam. It's so tiresome watching other developers fight over this topic.
@@DemPilafian bro, as the Russian saying goes: "he who didn't give a ... about work lived to see his pension"
@@DemPilafian HTTP status code is necessary. How do you know the request is OK before parsing the payload without it? Many HTTP clients check the status code even before touching any of its contents. This is especially true for many REST clients. These clients only proceed if the status code met their expectations. People can debate all they want, it doesn't reduce the importance of HTTP status codes.
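For example, a minimal sketch in Python (using the third-party requests library; the URL is made up):

import requests

resp = requests.get("https://api.example.com/users/42")
resp.raise_for_status()  # bail out on 4xx/5xx before touching the body
data = resp.json()       # only parse the payload once the status looks OK

raise_for_status() is exactly that "check the status before touching the contents" pattern.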
@@bltzcstrnx That's like saying if you don't like Microsoft you obviously despise computers and want to go back to stone tablets.
QUIC is not a brand-new protocol or invention. It just relies on good networks. Good old UDP did it decades ago. If the network is bad, QUIC sucks.
QUIC is Google's NIH of TCP :)
Why does HTTP (an application-layer protocol) need to care about the physical connection (switching from cellular to WiFi, for instance)? Shouldn't that be handled lower in the network stack?
the QUIC connection ID is not network-level; it's a high-level abstraction that could be thought of as a sort of "session ID". When the network stack switches IP addresses, for example by losing WiFi signal and switching to 5G, the lower parts of the network stack establish a completely new connection, but the application layer uses the connection ID to continue the data transmission as if (almost) nothing had changed. So yes, it's an application-layer abstraction that allows for better decoupling from the underlying network stack.
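A toy sketch of the idea in Python (the wire format here is made up: each UDP datagram starts with an 8-byte connection ID) - the server keys sessions on that ID instead of on the source address, so a client whose IP changes keeps its session:

import socket

sessions = {}  # connection ID -> session state

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

while True:
    datagram, addr = sock.recvfrom(2048)
    conn_id, payload = datagram[:8], datagram[8:]
    state = sessions.setdefault(conn_id, {"received": 0})
    state["received"] += len(payload)
    state["addr"] = addr  # always reply to the *latest* source address
    sock.sendto(b"ack", state["addr"])  # session survives an address change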
You need to re-establish the connection when the underlying physical connection changes, because very likely your IP also changes. If you're in the middle of loading, or holding a persistent connection, it will likely be dropped, because the server now identifies you as a different client. Having an application-layer connection ID means the server can still identify the same client connection even when the underlying address changes.
@@bltzcstrnx ah, that's true, I'd forgotten that even the MAC address changes because it's different hardware
Usually yes, but the experience can be disruptive. Files stop downloading and have to be resumed, chat sessions drop, calls freeze, and the user is left annoyed while everything re-establishes from scratch - which can take a frustratingly long time, tens of seconds. Handling it at the transport layer means the application layer can carry on mostly unaffected. Not entirely, but having your video call stutter and glitch for a few seconds is a lot better than disconnecting and having to call back.
And yes, video calls often run over HTTPS. Yes, this is dumb. But it has to be that way, because if the customer is on an office or school network the firewall is almost certain to block everything by default - https is the only protocol you can be sure will actually get through all the time. This is why we invented Fucking Websockets, the bane of the web-filter admin's life.
@@vylbird8014 aren't most web video calls using WebRTC? Which itself uses SCTP for the underlying transport. Also, most real-time mobile applications nowadays are quite good at handling network changes. I've played real-time games on mobile, and they have no problem switching from WiFi to cellular and vice versa.
I can't remember any more
Use SCTP? Will develop own protocol!
Status Code Transfer Protocol?
@@maighstir3003 "The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP), while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability.
SCTP is standardized by the Internet Engineering Task Force (IETF) in RFC 9260. The SCTP reference implementation was released as part of FreeBSD version 7, and has since been widely ported to other platforms." Wikipedia
@@maighstir3003 Stream Control Transmission Protocol RFC 9260
liked it
A general design flaw of the Internet is that there is no reliable packet handling. You either have unreliable packets (UDP) or reliable streams (TCP).
In many situations you want packets that are resent over a stateful connection but that tolerate a random order.
It is not a design flaw. Tolerating random order but needing guaranteed delivery are contradictory - in other words, it is not a precise requirement. How long can the app wait for a packet to arrive while processing other packets sent later? How can it signal that a packet has not arrived if it is not a stream?
If you have some specially tuned needs between the TCP-style stream and the UDP-style packet, then you need a particular application-level protocol implemented on top of UDP. That is exactly what HTTP/3 is.
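A minimal sketch of that middle ground (a made-up toy protocol, not how HTTP/3 actually frames things): the sender numbers each datagram and retransmits until it is acknowledged, while the receiver delivers whatever arrives in whatever order:

import socket, time

class ReliableUnorderedSender:
    # Resends numbered datagrams until ACKed; makes no ordering promises.
    def __init__(self, sock, peer, timeout=0.2):
        self.sock, self.peer, self.timeout = sock, peer, timeout
        self.next_seq = 0
        self.pending = {}  # seq -> (payload, last send time)

    def send(self, payload: bytes):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        self.pending[seq] = (payload, time.monotonic())
        self.sock.sendto(seq.to_bytes(4, "big") + payload, self.peer)

    def on_ack(self, ack: bytes):
        self.pending.pop(int.from_bytes(ack[:4], "big"), None)

    def retransmit_due(self):  # call this periodically
        now = time.monotonic()
        for seq, (payload, sent) in list(self.pending.items()):
            if now - sent > self.timeout:
                self.sock.sendto(seq.to_bytes(4, "big") + payload, self.peer)
                self.pending[seq] = (payload, now)

The receiver just delivers each payload immediately (no reordering buffer, so no head-of-line blocking) and echoes the 4-byte sequence number back as the ACK.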
👍 👍 👍 👍
cure
Still use 1.1 on all my servers, as many websites and applications don't work on 2.0.
"many websites and applications don't work on 2.0" - you
such as? lol HTTP/2 acts the same as 1.1 for 99% of things
@@gg-gn3re It was as I said. On my last try the majority of sites were slower and some didn't work at all.
@@AncapDude such as?
@@gg-gn3re WordPress Sites, Shops, other PHP Web apps.
@@AncapDude Cris, you must revisit this. HTTP/2 has been implemented by the browsers for years and client code does not change at all - it's 100% backward compatible. So if you are worried that the clients will not work: don't. Also, the server will fall back to the old version when the client cannot handle anything higher than 1.1.
As for the server side: you just have to install the new version of the server, and your application will just work, because the HTTP API is the same on the server side as well.
If your app does not work on the new version of the server (nginx, Apache or some other server), it means they are very old and outdated, and probably also missing security patches.
Why does the intro feel AI-generated?
Currently it's not possible to use the QUIC protocol for a Next.js project, right? 🤔
I like the graphics.
Please work on the enunciation. "Unlike" needs K, "side" needs D, "assets" needs T, "lets" needs T,... Those issues made the video hard to follow purely by acoustics, not content.
HTTP/3 IDs = easier tracking.
Remember, HTTP/2 and HTTP/3 were pushed primarily by the ad industry.
🙂👍🏻
HTTP2 & 3 is like IPv6, i'll probably stick to the old versions for as long as possible.
At least now i know what differences there are between them.
There’s really no reason to postpone using HTTP 2.0
I doubt support for at least 1.1 will ever go away.
Honestly, if all you've got is a simple, static web page, HTTP 1.1 is all you need.
The features available in 2.0+ are really more intended for sites that require a more active, constant connection.
You are using the new version without realizing it. The browsers support it without telling you. The servers need only version upgrades. The apps do not need to change. Programmers are using new versions without knowing it.
HTTP/2 has already been the most popular protocol for some time. The share of HTTP/2 + HTTP/3 is ~70% of internet traffic.
The thing I learned recently:
We know that our devices connected to WiFi share the same external IP address but have local IP addresses; what I didn't know is that we can also share the same external IP address with other ISP customers. ISPs do that because of the limited number of IPv4 addresses. The technique is called *Carrier-grade NAT*