One "Fun" annoyance: You can't use QUIC without a TCP fallback, because a great number of company firewalls will block any port that isn't expressly allowed (as is sensible), and they generally don't allow UDP. Chrome will always open both HTTP3 and HTTP2 connections when trying to reach a host, so if the 3 fails it can fall back to 2 in no time at all.
If the browser suppots HTTP3: The browser first connects to the website via HTTP/2 and if the server advertises it's HTTP/3 port then the browser attempts to switch to HTTP/3; if it works, cool. The browser will use HTTP/3 for future requests too. Failure means there is no UDP response within a time range - that is a few seconds (based on the browser settings). So in this case the initial connection will be really slow. And it's not the server's fault, just a firewall between the server and the client drops UDP packets. (That is why by default the browser first uses HTTP/2.) After the timeout, the browser goes back to HTTP/2 and it will keep using HTTP/2 for future requests until you close the process. (In chrome's network debugger tool the protocol just shows HTTP/2. There is no sign, that it tried HTTP/3, but in the timing window there is a huge connection establish time.)
HTTP/4: The AI powered server pushes ads and recommended paid content directly to your devices without the slowdown of waiting for you to request them. Efficiently uploads your private information with 0 round trips to ask for permission.
Q: How many software engineers does it take, to send a message from one computer to another? A: Apparently many millions of us, over a period of about 30 years.
You should EQ filter out some of the low end bass frequencies of your voice (aka high-pass). The voice audio is "boomey" when listening to the video on capable speakers. Great video, love the information and presentation!
The host header was a big deal in http 1.1. Before that you could not host multiple websites on the same ip, so if you were a web host you would only do separate paths for separate sites, or have to give each site their own server. Remember geocities?
@@MrHirenP It says in the text: Adobe Illustrator and After Effects. I would also like to learn this, but there's NO way I would use an Adobe product. I have to search now and see what the open-source alternative would be.
@@andycivil Blender should be able to do all that, supports scripting, and has a very good video editor and compositor built in. The video editor is often overlooked.
When talking about QUIC integrating with TLS 1.3 the narrator actually says "TCP 1.3" which is not correct 😅 This is a great summary though, I love the visuals
I've never understood why HTTP was designed to close the connection after requesting a single object. I do understand compute resources were much more constrained than they are today, so keeping tens of thousands of TCP connections open might be problematic, but even then I figured it was better to open a connection, request all the objects for a page, then at that point maybe terminate the connection.
Because in early versions of HTML, you were unlikely to be required to retrieve that many extra resources in the first place. IMAGES didn't even exist as part of HTML for a few years, and while the image tag got included as part of early HTML drafts around 1993, it wasn't actually part of any official HTML standard until the standard itself was established as HTML 2.0 in 1995. Consider the name: Hyper _TEXT_ Markup Language. It was primarily intended to be a text-based presentation of information, which could all be delivered as one HTML document, needing one HTTP transfer. Hyper _media,_ with so many external resources needed(audio, video, images, scripts, etc), didn't really come around until later, and didn't become a gigantic focus until much later. EDIT: Also, HTTP 1.1, which first introduced the ability to use a single connection to handle multiple requests, came out only a year after HTTP 1.0 did, in 1997. Once the need for requesting multiple resources became apparent, it was added to the standard fairly quickly.
@@Acorn_Anomaly Thank you, that does shed some light on things. I got on the internet in the early 90's and most pages had images by that time, so I might have been a little late to the party. =)
@@K9Megahertz @Acorn_anomoly has explained it very well. I just want to add one minor thing. The focus on text is evidenced not just from Hyper Text Markup Language, but also the name of the protocol itself: Hyper Text Transfer Protocol. It was all supposed to be about just text.
@@K9MegahertzAdding on to the lack of multimedia thing, I think it's important to note how HTML didn't natively support any form of multimedia content besides images until HTML 5, which had its first draft release in 2008, and wasn't finalized as a standard until _2014._ That's right, it was only _ten years ago_ that HTML gained native support for audio and video. Anything before then relied on external plugins and special browser support(like Adobe Flash or Java applets, or ActiveX objects). It was _not_ a part of HTML. (HTML _did_ have a tag for specifying external content, "applet", and later "object", but how they were used was basically up to browsers and plugins. The tag just pointed to the content, it was up to the browser and plugins what to do with it.)
1:20 actually we didn't use TLS back in the times of HTTP1.0 because it used too much of the precious CPU cycles. Unless you had to do something really security critical thing you just opted for the cheap solution. Also there wasn't much to be stolen on the internet back then :)
Very good explanation. It seems wrong that we still call it HTTP. All these new features and protocol changes are about everything other than the transferring of hypertext.
In the future you'll just get a streaming video feed with no client side logic. You won't need to worry about security because it all runs through the NSA trunk line into Google. The next step is to download an AI into your NPU and you won't even need to receive content because the AI will generate it on the fly, along with recommendations for a cold, delicious Coca-Cola product.
This protocol was dumb from the very first second of its idea. Text request, then BINARY reply with picture... WTF?!?! W3C old m0r0ns cannot make anything properly!
HTTP 1.1 is text based protocol, each header is separated by CRLF "carriage return and line feed" symbols like in Windows text files, in programming languages written like " " in string literal. After the last header you need to add additional CRLF to define the start of body of http request or response, the length of data in a body is expected to be set by "Content-Length" header. Example: POST /api/endpoint HTTP/1.1 *CRLF* Content-Type: application/json *CRLF* Content-Length: 34 *CRLF* *CRLF* { "someJsonKey": "someJsonValue" }
It's also hard to implement from server side. Basically they found better way to just insert things in data-URI and it much more easier and faster than doing push.
1. No. Onion only transports TCP, not UDP. 2. It is now. Originally it was controlled by google, but they took it to the IETF and had them formalise it as a standard.
There is no "HTTP/1". Chunked encoding allows using persistent connections with dynamically computed requests where the size is not known in advance. You could always send content immediately with HTTP/1.0, it was never required to compute Content-Length. HTTP/2 still has head-of-line blocking problems, just not with dynamic content. The funny part of HTTP/3 is that it has been proven to be often slower than HTTP/2.
ugh I thought I was going to get into a sick video with code examples on how http sits over tcp or whatever... I'm too old for this abstract shit. I need to put eyes on browser and compiler (since I'm at it) code. It's incredible how the first protocols where developed all by Standford and DARPA or whatever, now it's some guy at google. Sick videos for professional level expert knowledge don't do well on youtube, and I understand it. I'm not in the mood for them half of the times myself.
@@PeterVerhas Its not that its secure because of encryption or something. Its because GET requests are cached, they also stay in your browser history. So if you submit a GET login form with your user and password, that will get cached and saved in your history, the length is very limited. POST however, its not cached, the data is not visible because is in the body not in the header, no length limit, supports different data types such as booleans. Also, the security is more of a safeguard more for the devs/websites rather than the user. Because we made the difference that GET just gets a resource, while POST alters or does something on the server.
HTTP/1.1 Head-of-line: is it correct that if a request is delaying, all next request are waiting? As I read, if a request is delaying, just a new request has to wait.
As someone who has had to configure a web content filter, I despise websockets. They are an ugly hack. An accident of history that exists solely as a workaround for... everything. 1. Design TCP. 2. Design HTTP. 3. HTTP is amazing! 4. Firewall admins block everything other than HTTP because they don't know what all the rest is, and it might be a security concern. 5. But now how do we do real-time chat and games and and low-latency interactions? 6. Hear me out... What about... TCP over HTTP over TCP!
Websockets have little to nothing to do with HTTP. They are essentially just a mechanism to turn a HTTP connection into a plain TCP again. This is another great case of corporate firewalls breaking shit for everyone. WS are in a way much more like FTP.
@@joergsonnenberger6836 Indeed they are, and that is their awkwardness: The only reason they exist is as workaround, to deal with the problem of so many firewalls blocking everything that isn't http(s) for security and content filtering reasons. It's just part a back-and-forth between firewall admins trying to keep out unrecognised traffic and application developers trying to make their unrecognised traffic get through.
@@zazethe6553 HTTP/3 is using the good, old UDP (which belongs to the TCP/IP protocol suite) but in disguise, under the name of a QUIC protocol. Which is yet another prothesis atop an old protocol. Aside from that, QUIC being a sublayer of HTTP/3, is an application protocol. By that, it violates the layer model, by providing transport layer protocol functionality within the application layer protocol.
Why does the http (an application layer) need to care about the physical connection (switching from cellular to WiFi for instance) - should that not be handled lower in the network stack?
the QUIC connection ID is not network level, it's an high level abstraction that could be thought of as some sort of "session ID". When the network stack switches IP addresses, for exemple by losing Wifi signal and switching to 5G, the lower parts of the network stack establish a completely new connection but the application layer uses the connection ID to continue the data transmission as nothing had changed (almost). So, yes, it's an application layer abstraction that allows for better decoupling with the underying network stack.
You need to reestablish connection when the underlying physical connection changes. Because, very likely, in this case, your IP also changes. If you're in the middle of loading during this or in constant connection, it'll likely be disconnected because now the server identifies you as a different client. Having an application layer connection ID means the server can still identify the same client connection even when the underlying address changes.
Usually yes, but the experience can be disruptive. Files stop downloading and have to be resumed, chat sessions drop, calls freeze, and the user is left annoyed while everything re-establishes from scratch - which can take a frustratingly long time, tens of seconds. Handling it at transport layer means the application layer can carry on mostly unaffected. Not entirely, but having your video call stutter and glitch for a few seconds is a lot better than disconnecting and having to call back. And yes, video calls often run over HTTPS. Yes, this is dumb. But it has to be that way, because if the customer is on an office or school network the firewall is almost certain to block everything by default - https is the only protocol you can be sure will actually get through all the time. This is why we invented Fucking Websockets, the bane of the web-filter admin's life.
@@vylbird8014 isn't most web video calls using WebRTC? Which itself uses SCTP for the underlying transport. Also, most real-time mobile applications nowadays are quite good at handling network changes. I've played real-time games on mobile, and it doesn't have a problem switching from wifi to cellular and vice-versa.
the move to UDP was motivated by the pain of trying to edit TCP in any way. You can just send UDP packets and not do any assembly of them in the kernel; just have the applications do it. And in order to prevent firewalls from messing with it, it's inside of TLS.
It's a little bit sillier even than that. The move to UDP, specifically, was because there can never be another protocol at transport layer now. We have TCP, UDP and ICMP - and never may there be another, because we have address translation and firewalls all over the place now. Not that people haven't tried - multipath TCP, SCTP, UDP-lite. They all hit the same problem: Unless the firewalls and address translation all along the route are configured to handle them, they don't work, and that just isn't realistic in most cases. So we are eternally stuck with only TCP and UDP.
@@vylbird8014 the one who designs UDP is a forward thinker. Even with TCP and UDP are cemented, people can just use UDP when designing new application layer protocol.
@@vylbird8014 there is no reason that we need TCP to be in the kernel, given that UDP exists. make everything UDP sent to userland, and userland has TCP as a library. network accelerators basically hand off an entire ethernet card to a userspace program now; to keep it from bothering the kernel. it's like a 10x speedup.
1997 sounds dreamy. Having no HTTP Status Codes would eliminate countless hours of pointless navel-gazing debate. It's funny (in a tragic way) how StackOverflow questions about HTTP Status Codes often have multiple directly conflicting incompatible answers each with hundreds of upvotes. Whatever your interpretation of HTTP Status Codes is, it's wrong.
HTTP status codes are important in many scenarios. For example, cases where you browser will prompt you for credentials. Or for cache validation responses or redirects. If you send the wrong response code, the browser will not handle it properly. Yeah there are cases where it isn't clear or meaningful on which should be used when but that doesn't mean they aren't valuable.
@@username7763 1) You're talking about errors closer to the "protocol" level, and HTTP Status Codes at that level make sense especially when you consider that the "P" in HTTP stands for "Protocol". 2) The problem with HTTP Status Codes is when it comes to "application" level errors. The 400 Bad Request code gets argued over ad nauseam. It's so tiresome watching other developers fight over this topic.
@@DemPilafian HTTP status code is necessary. How do you know the request is OK before parsing the payload without it? Many HTTP clients check the status code even before touching any of its contents. This is especially true for many REST clients. These clients only proceed if the status code met their expectations. People can debate all they want, it doesn't reduce the importance of HTTP status codes.
QUICK ist not a protocol or something brand new or invention. It just reliy on good networks. Old good UDP did it decades ago. If network is bad QUICK sucks.
@@maighstir3003 "The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP), while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability. SCTP is standardized by the Internet Engineering Task Force (IETF) in RFC 9260. The SCTP reference implementation was released as part of FreeBSD version 7, and has since been widely ported to other platforms." Wikipedia
I like the graphics. Please work on the enunciation. "Unlike" needs K, "side" needs D, "assets" needs T, "lets" needs T,... Those issues made the video hard to follow purely by acoustics, not content.
A General Design Flaw of the Internet is that there is no secure Packet handling. You either have insecure Packets (UDP) or Secure Streams (TCP). In Many Situations, you want Packets, that are resend over a statefull connection, but tolerate a random order.
It is not a design flaw. Tolerating random orders but needing guaranteed delivery are contradicting. In other words, it is not a precise requirement. How long can the app wait for a package to arrive and process other packages sent later? How can it signal that a package has not arrived if it is not a stream? If you have some specially tuned needs between the TCP-implemented stream and the UDP-implemented package, then you need a particular application-level protocol implemented using UDP. That is exactly HTTP/3.
@@AncapDudeCris, you must revisit this. Http 2 is implemented by the browsers for years and cluent code does not change at all, 100% backward compatible. So if you are worried about that the clients will not work: don’t. Also, the server will revert back to the old version when the client cannot handle higher than 1.1 As on the server side: you just have to install the new version of the server, and your application will just work because the http api is the same on the server side as well. If your app does not sork on the new version of the server (nginx, Apache or some other server) it means they are very old, outdated and are probably missing also secirity patches.
@bytebytego I'm confused in few things. What is protocol? I mean was it algorithm and someone converted it into code or what it is? How it works under the hood
A protocol is a set of agreements. So if both the server and browser use these agreements, everything should work fine. It doesn’t say anything about how it should be implemented or which code to use. A protocol might refer to algorithms like BASE64, but how you implement this is up to the developer.
@@EdwinMartinAfter saying that, BASE64 is not an algorithm. It is a representation format, a kind of protocol per se, how to represent binary data using a limited character set. How my code works, what it does, converting binary data to a character array and back is the algorithm.
It's not the algorithm itself. It's a description of procedure, just like in any formal process between two parties - what to say, when to say it, which documents to expect and when. The technical version of a business process. Except that computers are the most inflexible and stubborn of bureaucrats, so the protocol needs to define every last aspect in exacting and unambiguous detail.
@@bltzcstrnx I’m not surprised. Everything about it feels amateurish, like it definitely was not done by a professional software engineer for scalable production purposes.
@@mensaswede4028 You sound very arrogant. Have you considered that Tim Berners-Lee had different requirements in 1990/1991 and scalability was not even a buzzword yet? If you want to compare it to the work of professional software engineers of the time, maybe take a look at CORBA, which was invented at about the same time. Frankly, I take the simplicity (and naivety) of HTTP/0.9 any day.
@@mensaswede4028 That doesn't make it a bad design. In fact, much of the complexity in modern versions are a direct result of badly designed systems. Open a random web page and you will see hundreds of requests to dozens of domains. Most of that can be avoided by a well designed web page. It's just that most people don't care. Ads are even worse. That's most of the justification for the complexity of HTTP/2. Now look at each request. How many cookies does a typical website need? All that junk is unnecessary in a well designed system, but that's not in the interest of Google. Instead we optimize the protocols to have somewhat more efficient encodings of junk.
I doubt support for at least 1.1 will ever go away. Honestly, if all you've got is a simple, static web page, HTTP 1.1 is all you need. The features available in 2.0+ are really more intended for sites that require a more active, constant connection.
You are using the new version without realizing it. The browsers support it without telling you. The servers need only version upgrades. The apps do not need to change. Programmers are using new versions without knowing it.
The thing i learned recently is that: We know that our devices connected to WIFI share the same external ip address but have local ip addresses, but i didn't know that we also share the same external ip address with other ISP customers, they do that because of limited number of IPs in ipv4. The technique is called *Carrier-grade NAT*
If only explanations could win Oscars… Thank u for delivering such high quality content!!!
One "Fun" annoyance: You can't use QUIC without a TCP fallback, because a great number of company firewalls will block any port that isn't expressly allowed (as is sensible), and they generally don't allow UDP. Chrome will always open both HTTP3 and HTTP2 connections when trying to reach a host, so if the 3 fails it can fall back to 2 in no time at all.
Yeah, because QUIC is just UDP, no more, no less.
the new IE6 compatibility
If the browser supports HTTP/3: the browser first connects to the website via HTTP/2, and if the server advertises its HTTP/3 port, the browser attempts to switch to HTTP/3; if it works, cool. The browser will use HTTP/3 for future requests too.
Failure means there is no UDP response within a time window of a few seconds (based on the browser settings), so in that case the initial connection will be really slow. And it's not the server's fault; a firewall between the server and the client is dropping UDP packets. (That is why the browser uses HTTP/2 first by default.) After the timeout, the browser falls back to HTTP/2 and keeps using HTTP/2 for future requests until you close the process. (In Chrome's network panel the protocol just shows HTTP/2. There is no sign that it tried HTTP/3, but in the timing view there is a huge connection-establishment time.)
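A minimal sketch of the discovery step described above, using Python's standard library: the server advertises its HTTP/3 endpoint in an Alt-Svc response header on the TCP connection, and the client decides whether to try UDP/QUIC based on it. The hostname below is a placeholder, and whether you actually see the header depends entirely on the server's configuration.

```python
# Minimal sketch: fetch a page over TCP and look for the Alt-Svc header,
# which is how a server advertises its HTTP/3 (QUIC over UDP) endpoint.
# "example.com" is a placeholder; a real deployment typically answers with
# something like: alt-svc: h3=":443"; ma=86400
import http.client

conn = http.client.HTTPSConnection("example.com", 443, timeout=5)
conn.request("GET", "/")
resp = conn.getresponse()

alt_svc = resp.getheader("alt-svc")
if alt_svc:
    print("Server advertises alternative services:", alt_svc)
else:
    print("No Alt-Svc header; a browser would simply stay on HTTP/1.1 or HTTP/2")
conn.close()
```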
The thumbnail was so complete and beautiful that I couldn't help pressing the like button automatically, before even playing the video.
HTTP/4: The AI powered server pushes ads and recommended paid content directly to your devices without the slowdown of waiting for you to request them. Efficiently uploads your private information with 0 round trips to ask for permission.
What a backwards idea, with neuralink you can have ads directly in your brain! No need to look at a screen or even open your eyes.
Q: How many software engineers does it take to send a message from one computer to another?
A: Apparently many millions of us, over a period of about 30 years.
One. But it's the most reinvented wheel in human history.
The first message ever sent on ARPANET was "lo". It was supposed to be "login" but the still-experimental router crashed after two characters.
This video is so packed, if there were a quiz at the end of it I'd definitely fail at it, more than once :D
Me: I understand HTTP fairly well
ByteByteGo: nope
great way of explaining how the web works
You should EQ filter out some of the low-end bass frequencies of your voice (aka apply a high-pass filter). The voice audio is "boomy" when listening to the video on capable speakers. Great video, love the information and presentation!
The Host header was a big deal in HTTP/1.1. Before that you could not host multiple websites on the same IP, so if you were a web host you would either do separate paths for separate sites or have to give each site its own server. Remember GeoCities?
Yeah and they reintroduced the problem with SSL. It's funny how every new generation repeats the problem of the old one.
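For anyone curious what the Host header looks like on the wire: a small Python sketch sending the same request to the same IP with two different Host values, which is all a name-based virtual-hosting server needs to pick the right site. The IP and hostnames below are documentation placeholders, not real servers.

```python
# Sketch of name-based virtual hosting: one IP, two sites, distinguished
# only by the Host header. The address and hostnames are made up.
import socket

SERVER_IP = "192.0.2.10"   # documentation address, stands in for a shared web host

def fetch(host_header: str) -> bytes:
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {host_header}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    ).encode("ascii")
    with socket.create_connection((SERVER_IP, 80), timeout=5) as sock:
        sock.sendall(request)
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

# Same IP, same port; only the Host header differs, and the server
# can serve two completely different websites.
site_a = fetch("site-a.example")
site_b = fetch("site-b.example")
```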
A simple way of getting complex information. It's perfect content. Subscribed. Well done, guys!!!
These animations are insane
Do you know how it was done? I’d like to learn this
@@MrHirenP It says in the text: Adobe Illustrator and After Effects. I would also like to learn this, but there's NO way I would use an Adobe product. I have to search now and see what the open-source alternative would be.
@@andycivil Blender should be able to do all that, supports scripting, and has a very good video editor and compositor built in. The video editor is often overlooked.
When talking about QUIC integrating with TLS 1.3 the narrator actually says "TCP 1.3" which is not correct 😅
This is a great summary though, I love the visuals
I've never understood why HTTP was designed to close the connection after requesting a single object. I do understand compute resources were much more constrained than they are today, so keeping tens of thousands of TCP connections open might be problematic, but even then I figured it was better to open a connection, request all the objects for a page, then at that point maybe terminate the connection.
Because in early versions of HTML, you were unlikely to be required to retrieve that many extra resources in the first place.
IMAGES didn't even exist as part of HTML for a few years, and while the image tag got included as part of early HTML drafts around 1993, it wasn't actually part of any official HTML standard until the standard itself was established as HTML 2.0 in 1995.
Consider the name: Hyper _TEXT_ Markup Language. It was primarily intended to be a text-based presentation of information, which could all be delivered as one HTML document, needing one HTTP transfer.
Hyper _media,_ with so many external resources needed (audio, video, images, scripts, etc.), didn't really come around until later, and didn't become a gigantic focus until much later.
EDIT: Also, HTTP 1.1, which first introduced the ability to use a single connection to handle multiple requests, came out only a year after HTTP 1.0 did, in 1997. Once the need for requesting multiple resources became apparent, it was added to the standard fairly quickly.
@@Acorn_Anomaly Thank you, that does shed some light on things. I got on the internet in the early 90's and most pages had images by that time, so I might have been a little late to the party. =)
@@K9Megahertz @Acorn_anomoly has explained it very well. I just want to add one minor thing.
The focus on text is evidenced not just from Hyper Text Markup Language, but also the name of the protocol itself: Hyper Text Transfer Protocol. It was all supposed to be about just text.
🎉
@@K9Megahertz Adding on to the lack-of-multimedia thing, I think it's important to note how HTML didn't natively support any form of multimedia content besides images until HTML 5, which had its first draft release in 2008, and wasn't finalized as a standard until _2014._
That's right, it was only _ten years ago_ that HTML gained native support for audio and video.
Anything before then relied on external plugins and special browser support (like Adobe Flash or Java applets, or ActiveX objects). It was _not_ a part of HTML.
(HTML _did_ have a tag for specifying external content, "applet", and later "object", but how they were used was basically up to browsers and plugins. The tag just pointed to the content, it was up to the browser and plugins what to do with it.)
1:20 Actually, we didn't use TLS back in the times of HTTP/1.0, because it used too many precious CPU cycles. Unless you had to do something really security-critical, you just opted for the cheap solution. Also, there wasn't much to be stolen on the internet back then :)
And TLS was called SSL :)
And SSL was not free(price) to use
@@tribela Not on the general internet anyway. There were no free cert providers.
SSL 2.0 actually predates HTTP/1.1 by three years.
Thank you for this historical review!
- Thx.
- Well done: clear/concise; and informative. Excellent graphics, too.
- Keep up the great content...
QUIC breaks content inspection at the firewall and in local endpoint security solutions. Many orgs just block it outright until security catches up.
Very good explanation. It seems wrong that we still call it HTTP. All these new features and protocol changes are about everything other than the transferring of hypertext.
In the future you'll just get a streaming video feed with no client side logic. You won't need to worry about security because it all runs through the NSA trunk line into Google. The next step is to download an AI into your NPU and you won't even need to receive content because the AI will generate it on the fly, along with recommendations for a cold, delicious Coca-Cola product.
This protocol was dumb from the very first second of its idea. Text request, then BINARY reply with picture... WTF?!?! W3C old m0r0ns cannot make anything properly!
Thanks for the vid, but it would be fair to point out the downsides of HTTP/3 like you did with the previous versions of the protocol.
i'm just curious, which tools do you use to make these animations?
HTTP/1.1 is a text-based protocol: each header line ends with CRLF ("carriage return and line feed") characters, like in Windows text files; in programming languages this is written as "\r\n" in a string literal.
After the last header you add one more CRLF to mark the start of the body of the HTTP request or response; the length of the data in the body is expected to be given by the "Content-Length" header. Example:
POST /api/endpoint HTTP/1.1 *CRLF*
Content-Type: application/json *CRLF*
Content-Length: 34 *CRLF*
*CRLF*
{ "someJsonKey": "someJsonValue" }
There is a mismatch in the thumbnail for the diagrams of HTTP/1.1 and HTTP/2. I think it should be swapped.
Did you click on it, watch the video, and/or engage in the comments? If yes: working as intended.
@@-felt The point was the topic, not the mismatch. I also noticed it…
Awesome video .... great visualizations.
FYI: server push from HTTP 2 is no longer supported and many browsers
AND MANY BROWSERS WHATT
@@lucaxtshotting2378 in many browsers. Sorry
@@lucaxtshotting2378 we might never know 😭
In @@lucaxtshotting2378
It's also hard to implement on the server side.
Basically they found a better way: just inline things as data URIs, which is much easier and faster than doing push.
thanks for the knowledge. great presentation and an easy-to-absorb lesson ❤
Thank u, Great explanation as always 👏🏻
the visual aid goes kinda crazy with it
Is QUIC compatible with onion protocol?
Is QUIC a public standard or are there legal limitations on how it can be used related to ownership?
1. No. Onion only transports TCP, not UDP.
2. It is now. Originally it was controlled by google, but they took it to the IETF and had them formalise it as a standard.
@@vylbird8014 Thanks for the info.
You rock!
There's an explosion of information, and I need to pause and think in a lot of places.
he says 60% of internet is using http 2, LOL
Because we can't process on-screen text and different spoken text at the same time. So we have to pause to read what's on the screen.
There is no "HTTP/1". Chunked encoding allows using persistent connections with dynamically computed requests where the size is not known in advance. You could always send content immediately with HTTP/1.0, it was never required to compute Content-Length. HTTP/2 still has head-of-line blocking problems, just not with dynamic content. The funny part of HTTP/3 is that it has been proven to be often slower than HTTP/2.
which middleware, web host, or language stack supports HTTP/3?
AMAZING EXPLANATION THANK YOU SO MUCH
neat. appreciate your animations :)
ugh I thought I was going to get into a sick video with code examples on how http sits over tcp or whatever... I'm too old for this abstract shit.
I need to put eyes on browser and compiler (since I'm at it) code.
It's incredible how the first protocols were all developed by Stanford and DARPA or whatever; now it's some guy at Google.
Sick videos with professional-level expert knowledge don't do well on youtube, and I understand it. I'm not in the mood for them half of the time myself.
Nice overview. Thanks.
hi what tools did you make to do the animation? thx
Thank you for doing this!
To be clear: POST was common before 1996. How else would you submit forms? It was only standardized in 1996.
You are correct with your statement, but the reasoning is wrong. You can send form data in the request URL using GET.
@@PeterVerhas Yes, you can send form data using GET, but I'm sure the url length was limited so it was discouraged.
Simply by a GET, but we use POST to avoid sending all the data through the URL, because that's not secure.
@@2pacgamer only if you can tell me why it is not secure. It travels in the same tcp channel.
@@PeterVerhas It's not that it's secure because of encryption or something. It's because GET requests are cached, and they also stay in your browser history. So if you submit a login form with GET, your username and password will get cached and saved in your history, and the URL length is very limited. POST, however, is not cached, the data is not visible because it's in the body rather than the URL, there's no length limit, and it supports different data types such as booleans. Also, the "security" is more of a safeguard for the devs/websites than for the user, because we made the distinction that GET just gets a resource, while POST alters or does something on the server.
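A tiny sketch of the practical difference described above: with GET the form data rides in the URL (and therefore in history, logs and caches), while with POST it travels in the request body. Standard-library Python; the URL is a placeholder.

```python
# Sketch: the same form data as a GET query string vs a POST body.
# The URL is a placeholder; don't actually send credentials with GET.
from urllib.parse import urlencode
from urllib.request import Request

form = {"user": "alice", "password": "hunter2"}
encoded = urlencode(form)  # 'user=alice&password=hunter2'

# GET: data is part of the URL, so it ends up in browser history,
# server logs and possibly caches.
get_req = Request("https://login.example/session?" + encoded, method="GET")

# POST: data travels in the body; the URL stays clean.
post_req = Request(
    "https://login.example/session",
    data=encoded.encode("ascii"),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)

print(get_req.full_url)   # credentials visible in the URL
print(post_req.data)      # credentials only in the body
```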
Excellent content as always!!
Suitable for high quality content and 3d objects manipulating transmission.
great video, thanks for sharing
Nice illustrations. Truly a picture is worth a thousand words. I think the like button is broken due to how I mashed it 😅.
How do I ensure that my Firefox goes to page with http3/quic ?
HTTP/1.1 head-of-line blocking: is it correct that if one request is delayed, all subsequent requests have to wait? From what I read, if a request is delayed, only the next request has to wait.
thank you! I enjoyed this video a lot!
I'm pretty sure at the time of HTTP/1, TLS didn't yet exist.
Well done!
WebSockets are an important feature of HTTP/2 that got little attention in this video, imho
As someone who has had to configure a web content filter, I despise websockets. They are an ugly hack. An accident of history that exists solely as a workaround for... everything.
1. Design TCP.
2. Design HTTP.
3. HTTP is amazing!
4. Firewall admins block everything other than HTTP because they don't know what all the rest is, and it might be a security concern.
5. But now how do we do real-time chat and games and low-latency interactions?
6. Hear me out... What about... TCP over HTTP over TCP!
WebSockets have little to nothing to do with HTTP. They are essentially just a mechanism to turn an HTTP connection back into plain TCP. This is another great case of corporate firewalls breaking shit for everyone. WS are in a way much more like FTP.
@@joergsonnenberger6836 Indeed they are, and that is their awkwardness: the only reason they exist is as a workaround, to deal with the problem of so many firewalls blocking everything that isn't HTTP(S) for security and content-filtering reasons. It's just part of a back-and-forth between firewall admins trying to keep out unrecognised traffic and application developers trying to make their unrecognised traffic get through.
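For context on the "TCP over HTTP over TCP" jab: a WebSocket starts life as an ordinary HTTP/1.1 request with Upgrade headers, and the server proves it understood by hashing the client's key with a fixed GUID (RFC 6455). A minimal sketch of just that handshake, with a hypothetical host:

```python
# Sketch of the WebSocket opening handshake (RFC 6455): an HTTP request
# with Upgrade headers, answered by "101 Switching Protocols", after which
# the TCP connection carries WebSocket frames instead of HTTP.
import base64
import hashlib
import os

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed constant from RFC 6455

client_key = base64.b64encode(os.urandom(16)).decode("ascii")

handshake_request = (
    "GET /chat HTTP/1.1\r\n"
    "Host: chat.example\r\n"
    "Upgrade: websocket\r\n"
    "Connection: Upgrade\r\n"
    f"Sec-WebSocket-Key: {client_key}\r\n"
    "Sec-WebSocket-Version: 13\r\n"
    "\r\n"
)

# The server echoes back proof that it speaks WebSocket:
accept = base64.b64encode(
    hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
).decode("ascii")

handshake_response = (
    "HTTP/1.1 101 Switching Protocols\r\n"
    "Upgrade: websocket\r\n"
    "Connection: Upgrade\r\n"
    f"Sec-WebSocket-Accept: {accept}\r\n"
    "\r\n"
)
print(handshake_request)
print(handshake_response)
```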
Excellent video Thanks!!
Well done.
Very nice, I really liked it :)
Excellent 👌
Sadly, we tend to forget that somewhere deep, underneath those modern layers, a good old TCP/IP still exists.
No, under http/3 there's no TCP as explained here
@@zazethe6553 HTTP/3 is using the good old UDP (which belongs to the TCP/IP protocol suite), but in disguise, under the name of the QUIC protocol, which is yet another prosthesis atop an old protocol. Aside from that, QUIC, being a sublayer of HTTP/3, is an application protocol. By that, it violates the layer model by providing transport-layer functionality within an application-layer protocol.
Is the exchange rate always lower on Bybit?
Why does the http (an application layer) need to care about the physical connection (switching from cellular to WiFi for instance) - should that not be handled lower in the network stack?
The QUIC connection ID is not network-level; it's a high-level abstraction that could be thought of as some sort of "session ID". When the network stack switches IP addresses, for example by losing WiFi signal and switching to 5G, the lower parts of the network stack establish a completely new connection, but the application layer uses the connection ID to continue the data transmission as if (almost) nothing had changed. So yes, it's an application-layer abstraction that allows for better decoupling from the underlying network stack.
You need to re-establish the connection when the underlying physical connection changes, because very likely your IP also changes. If you're in the middle of loading something, or holding a persistent connection, it will likely be dropped, because the server now identifies you as a different client. Having an application-layer connection ID means the server can still identify the same client connection even when the underlying address changes.
@@bltzcstrnx ah, that's true, I'd forgotten that even the MAC address changes because it's different hardware
Usually yes, but the experience can be disruptive. Files stop downloading and have to be resumed, chat sessions drop, calls freeze, and the user is left annoyed while everything re-establishes from scratch - which can take a frustratingly long time, tens of seconds. Handling it at transport layer means the application layer can carry on mostly unaffected. Not entirely, but having your video call stutter and glitch for a few seconds is a lot better than disconnecting and having to call back.
And yes, video calls often run over HTTPS. Yes, this is dumb. But it has to be that way, because if the customer is on an office or school network the firewall is almost certain to block everything by default - https is the only protocol you can be sure will actually get through all the time. This is why we invented Fucking Websockets, the bane of the web-filter admin's life.
@@vylbird8014 isn't most web video calls using WebRTC? Which itself uses SCTP for the underlying transport. Also, most real-time mobile applications nowadays are quite good at handling network changes. I've played real-time games on mobile, and it doesn't have a problem switching from wifi to cellular and vice-versa.
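A very rough conceptual sketch of the connection-ID idea discussed above; this is only the lookup principle, not the real QUIC packet format or cryptography. The server keys its session state on the connection ID instead of the client's address, so a new source IP and port does not break the session.

```python
# Conceptual sketch only: keying sessions on a connection ID (QUIC-style)
# instead of the client's address (classic TCP-style). Real QUIC connection
# IDs, packet parsing and crypto are far more involved.
from dataclasses import dataclass

@dataclass
class Session:
    conn_id: bytes
    last_addr: tuple[str, int]
    bytes_received: int = 0

sessions: dict[bytes, Session] = {}

def handle_datagram(conn_id: bytes, addr: tuple[str, int], payload: bytes) -> None:
    sess = sessions.get(conn_id)
    if sess is None:
        sess = sessions[conn_id] = Session(conn_id, addr)
    elif sess.last_addr != addr:
        # Client moved from Wi-Fi to cellular: new IP/port, same connection ID,
        # so the transfer just continues (a "connection migration").
        sess.last_addr = addr
    sess.bytes_received += len(payload)

handle_datagram(b"\x01\x02", ("203.0.113.5", 50000), b"hello")   # on Wi-Fi
handle_datagram(b"\x01\x02", ("198.51.100.7", 41000), b"world")  # after switching to 5G
print(sessions[b"\x01\x02"].bytes_received)  # 10: same session survived the move
```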
thank you for sharing
the move to UDP was motivated by the pain of trying to edit TCP in any way. You can just send UDP packets and not do any assembly of them in the kernel; just have the applications do it. And in order to prevent firewalls from messing with it, it's inside of TLS.
It's a little bit sillier even than that. The move to UDP, specifically, was because there can never be another protocol at transport layer now. We have TCP, UDP and ICMP - and never may there be another, because we have address translation and firewalls all over the place now. Not that people haven't tried - multipath TCP, SCTP, UDP-lite. They all hit the same problem: Unless the firewalls and address translation all along the route are configured to handle them, they don't work, and that just isn't realistic in most cases. So we are eternally stuck with only TCP and UDP.
@@vylbird8014 Whoever designed UDP was a forward thinker. Even with TCP and UDP cemented in place, people can just use UDP when designing a new application-layer protocol.
@@vylbird8014 There is no reason we need TCP to be in the kernel, given that UDP exists. Make everything UDP delivered to userland, and let userland have TCP as a library. Network accelerators basically hand off an entire Ethernet card to a userspace program now, to keep it from bothering the kernel; it's like a 10x speedup.
damn, these animations have got something to say!
thx bro!
Thanks ❤🎉
Very good content
A useful tip, it helped.
really nice !
Technical SEO ❤
wow thanks!
I can't remember any more
good video 🙏
1997 sounds dreamy. Having no HTTP Status Codes would eliminate countless hours of pointless navel-gazing debate. It's funny (in a tragic way) how StackOverflow questions about HTTP Status Codes often have multiple directly conflicting incompatible answers each with hundreds of upvotes. Whatever your interpretation of HTTP Status Codes is, it's wrong.
HTTP status codes are important in many scenarios. For example, cases where your browser will prompt you for credentials, or cache-validation responses, or redirects. If you send the wrong response code, the browser will not handle it properly. Yeah, there are cases where it isn't clear which one should be used when, but that doesn't mean they aren't valuable.
@@username7763 1) You're talking about errors closer to the "protocol" level, and HTTP Status Codes at that level make sense especially when you consider that the "P" in HTTP stands for "Protocol".
2) The problem with HTTP Status Codes is when it comes to "application" level errors. The 400 Bad Request code gets argued over ad nauseam. It's so tiresome watching other developers fight over this topic.
@@DemPilafian bro, "whoever didn't give a ... about work is the one who lived to see their pension"
@@DemPilafian HTTP status codes are necessary. Without them, how do you know the request succeeded before parsing the payload? Many HTTP clients check the status code before even touching the contents. This is especially true for many REST clients: they only proceed if the status code meets their expectations. People can debate all they want; it doesn't reduce the importance of HTTP status codes.
@@bltzcstrnx That's like saying if you don't like Microsoft you obviously despise computers and want to go back to stone tablets.
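To ground the point about clients checking the status line before touching the body, here is a small standard-library sketch; the host and path are placeholders.

```python
# Sketch: an HTTP client checks the status code before it even looks at
# the payload. Placeholder host; any JSON API would behave the same way.
import http.client
import json

conn = http.client.HTTPSConnection("api.example", timeout=5)
conn.request("GET", "/v1/things")
resp = conn.getresponse()

if 200 <= resp.status < 300:
    things = json.loads(resp.read())      # only now do we trust the body
elif resp.status in (301, 302, 307, 308):
    print("Redirected to:", resp.getheader("Location"))
elif resp.status == 401:
    print("Need credentials before retrying")
else:
    raise RuntimeError(f"Request failed: {resp.status} {resp.reason}")
conn.close()
```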
QUIC is not a brand-new protocol or some invention. It just relies on good networks. Good old UDP did it decades ago. If the network is bad, QUIC sucks.
QUIC is Google's NIH of TCP :)
Use SCTP? Will develop own protocol!
Status Code Transfer Protocol?
@@maighstir3003 "The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP), while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability.
SCTP is standardized by the Internet Engineering Task Force (IETF) in RFC 9260. The SCTP reference implementation was released as part of FreeBSD version 7, and has since been widely ported to other platforms." Wikipedia
@@maighstir3003 Stream Control Transmission Protocol RFC 9260
Dejon Shores
you're the best
Present sir
Currently it's not possible to use QUIC protocol for a next.js project, right? 🤔
I like the graphics.
Please work on the enunciation. "Unlike" needs K, "side" needs D, "assets" needs T, "lets" needs T,... Those issues made the video hard to follow purely by acoustics, not content.
Why does the intro feel like it's AI-generated?
Adams Crossroad
Weimann Locks
guys!!!! How did u make this video?
👍 👍 👍 👍
HTTP/3 IDs = easier tracking.
Remember, HTTP/2 and HTTP/3 were pushed primarily by the ad industry.
🙂👍🏻
A general design flaw of the internet is that there is no secure packet handling. You either have insecure packets (UDP) or secure streams (TCP).
In many situations you want packets that are resent over a stateful connection but can tolerate arriving in a random order.
It is not a design flaw. Tolerating random order but needing guaranteed delivery are conflicting requirements; in other words, it is not a precise requirement. How long can the app wait for a packet to arrive while processing other packets sent later? How can it signal that a packet has not arrived if it is not a stream?
If you have some specially tuned needs between the TCP-style stream and the UDP-style datagram, then you need a particular application-level protocol implemented on top of UDP. That is exactly what HTTP/3 is.
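A bare-bones sketch of that middle ground: per-message acknowledgements over UDP, so lost packets get retransmitted but messages are delivered in whatever order they arrive. Purely illustrative; QUIC and HTTP/3 do vastly more (streams, congestion control, encryption).

```python
# Sketch: "reliable but unordered" messaging over UDP. Each message carries a
# sequence number; the receiver ACKs it and delivers it immediately, in
# arrival order. The sender retransmits anything not ACKed in time.
# This is a toy, not QUIC: no congestion control, no encryption, no streams.
import socket
import struct
import time

HEADER = struct.Struct("!IB")  # sequence number, flags (1 = ACK)

def send_reliable(sock: socket.socket, addr, messages: list[bytes],
                  timeout: float = 0.2, retries: int = 5) -> None:
    sock.settimeout(timeout)
    unacked = {seq: msg for seq, msg in enumerate(messages)}
    for _ in range(retries):
        for seq, msg in unacked.items():          # (re)send everything still pending
            sock.sendto(HEADER.pack(seq, 0) + msg, addr)
        deadline = time.monotonic() + timeout
        while unacked and time.monotonic() < deadline:
            try:
                data, _ = sock.recvfrom(2048)
            except socket.timeout:
                break
            seq, flags = HEADER.unpack(data[:HEADER.size])
            if flags == 1:
                unacked.pop(seq, None)            # acknowledged, stop retransmitting
        if not unacked:
            return
    raise TimeoutError(f"messages never acknowledged: {sorted(unacked)}")

def receive_one(sock: socket.socket):
    data, addr = sock.recvfrom(2048)
    seq, _ = HEADER.unpack(data[:HEADER.size])
    sock.sendto(HEADER.pack(seq, 1), addr)        # ACK it
    return seq, data[HEADER.size:]                # delivered regardless of order
```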
cure
liked it
I still use 1.1 on all my servers, as many websites and applications don't work on 2.0.
"many websites and applications don't work on 2.0" - you
such as? lol, HTTP/2 acts the same as 1.1 for 99% of things
@@gg-gn3re It was as I said. On my last try the majority of sites were slower and some didn't work at all.
@@AncapDude such as?
@@gg-gn3re WordPress Sites, Shops, other PHP Web apps.
@@AncapDude Cris, you must revisit this. HTTP/2 has been implemented by browsers for years and client code does not change at all; it's 100% backward compatible. So if you are worried that clients will not work: don't. Also, the server will fall back to the old version when the client cannot handle anything higher than 1.1.
As for the server side: you just have to install the new version of the server, and your application will just work, because the HTTP API is the same on the server side as well.
If your app does not work on the new version of the server (nginx, Apache or some other server), it means they are very old, outdated and probably also missing security patches.
🙂🙂
@bytebytego I'm confused about a few things.
What is a protocol? I mean, is it an algorithm that someone converted into code, or what is it?
How does it work under the hood?
A protocol is a set of agreements. So if both the server and browser use these agreements, everything should work fine. It doesn’t say anything about how it should be implemented or which code to use. A protocol might refer to algorithms like BASE64, but how you implement this is up to the developer.
@@EdwinMartin Having said that, BASE64 is not an algorithm. It is a representation format, a kind of protocol per se: how to represent binary data using a limited character set. How my code works, what it does when converting binary data to a character array and back, is the algorithm.
Talking about http protocol, it uses TCP network socket for data communication meaning the TCP guarantees the order of data packets sent.
@@EdwinMartin is code a physical thing?
It's not the algorithm itself. It's a description of procedure, just like in any formal process between two parties - what to say, when to say it, which documents to expect and when. The technical version of a business process. Except that computers are the most inflexible and stubborn of bureaucrats, so the protocol needs to define every last aspect in exacting and unambiguous detail.
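A toy illustration of protocol versus implementation: the agreement is the few rules written as comments below; the functions are just one possible implementation of each side, and any other code that honours the same rules would interoperate with them.

```python
# A toy protocol, written as an agreement first:
#   1. The client sends exactly one line: "PING <text>\n".
#   2. The server replies with exactly one line: "PONG <same text>\n".
#   3. Anything else is an error and the connection is closed.
# How each side is coded (language, libraries, algorithms) is not part of
# the protocol; only the bytes on the wire are.

def client_message(text: str) -> bytes:
    return f"PING {text}\n".encode("utf-8")

def server_reply(raw: bytes) -> bytes:
    line = raw.decode("utf-8")
    if not line.startswith("PING ") or not line.endswith("\n"):
        raise ValueError("protocol violation")     # rule 3
    return f"PONG {line[5:-1]}\n".encode("utf-8")  # rule 2

assert server_reply(client_message("hello")) == b"PONG hello\n"
```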
HTTP feels like something that was designed as an undergraduate college project.
@@mensaswede4028 It was originally designed by a CERN engineer to facilitate the sharing of research papers.
@@bltzcstrnx I’m not surprised. Everything about it feels amateurish, like it definitely was not done by a professional software engineer for scalable production purposes.
@@mensaswede4028 You sound very arrogant. Have you considered that Tim Berners-Lee had different requirements in 1990/1991 and scalability was not even a buzzword yet? If you want to compare it to the work of professional software engineers of the time, maybe take a look at CORBA, which was invented at about the same time. Frankly, I take the simplicity (and naivety) of HTTP/0.9 any day.
@@joergsonnenberger6836 My point exactly. It’s a protocol that’s being used for a purpose for which it was never designed or intended.
@@mensaswede4028 That doesn't make it a bad design. In fact, much of the complexity in modern versions are a direct result of badly designed systems. Open a random web page and you will see hundreds of requests to dozens of domains. Most of that can be avoided by a well designed web page. It's just that most people don't care. Ads are even worse. That's most of the justification for the complexity of HTTP/2. Now look at each request. How many cookies does a typical website need? All that junk is unnecessary in a well designed system, but that's not in the interest of Google. Instead we optimize the protocols to have somewhat more efficient encodings of junk.
HT3P
HTTP/2 & 3 are like IPv6: I'll probably stick to the old versions for as long as possible.
At least now I know what the differences between them are.
There’s really no reason to postpone using HTTP 2.0
I doubt support for at least 1.1 will ever go away.
Honestly, if all you've got is a simple, static web page, HTTP 1.1 is all you need.
The features available in 2.0+ are really more intended for sites that require a more active, constant connection.
You are using the new version without realizing it. The browsers support it without telling you. The servers need only version upgrades. The apps do not need to change. Programmers are using new versions without knowing it.
HTTP2 is already the most popular protocol for some time. The share of HTTP2 + HTTP3 is ~70% of internet traffic.
The thing I learned recently is that:
We know that our devices connected to WiFi share the same external IP address but have local IP addresses; what I didn't know is that we can also share the same external IP address with other ISP customers. They do that because of the limited number of IPs in IPv4. The technique is called *carrier-grade NAT*.
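A side note that may help spot it: carrier-grade NAT has its own reserved range, 100.64.0.0/10 (RFC 6598), so if your router's WAN address falls inside it, you are almost certainly behind CGNAT. A small standard-library check:

```python
# Sketch: check whether an address sits in the carrier-grade NAT range
# 100.64.0.0/10, reserved by RFC 6598 for ISP-internal addressing.
import ipaddress

CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

for addr in ["100.72.13.5", "203.0.113.9", "192.168.1.20"]:
    ip = ipaddress.ip_address(addr)
    if ip in CGNAT_RANGE:
        print(addr, "-> carrier-grade NAT address (shared by many customers)")
    elif ip.is_private:
        print(addr, "-> private LAN address")
    else:
        print(addr, "-> ordinary public address")
```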
Edmundo Urrutia is a coward and a liar.
No web dev thinks about HTTP/3. It's an NFT, but for the web!
The best thing about HTTP 2 and HTTP 3 is that we aren't using them.
C'mon, let's see Byte Byte Go get to 1 million subscribers! Getting closer! 😎✌️
with such an amount of misinformation, less likely
ByteByteGo!!!