Internet Congestion Collapse - Computerphile

  • Published 27 Dec 2024

COMMENTS • 198

  • @GrimmerPl · 2 years ago · +64

    Great TCP "dynamic window" explanation, with a history lesson covering not only how it works but why we needed it in the first place.
    Much, much better than in CCNA books.

  • @zelllers · 2 years ago · +7

    TCP congestion control is one of my favorite topics in the entire world. Even today we're still trying to modernize it; there's been some great work with BBR

  • @EmptyGlass99 · 2 years ago · +62

    I love the way Dr Clegg explains this and interacts with his audience. Great teaching.

    • @I_leave_mean_comments · 2 years ago · +1

      I need to listen to him at 1.5 speed minimum... or else his awkward pausing drives me insane.

  • @Phlarx · 2 years ago · +4

    I had the opportunity to work with Mike Karels (one of the coauthors of that paper) a few years back. He's a great role model... humble and eager to help out the less-experienced programmers (like me). He'd probably be happy to speak with the Computerphile team, if they wanted to connect.

  • @chrisknestrick374 · 2 years ago · +17

    I remember reading - and re-reading - that paper in grad school. Truly a seminal paper and an EXCELLENT job of describing it!

  • @oresteszoupanos · 2 years ago · +70

    These days, with 32 kbps you can have a very fine phone call; that's how far lossy audio compression has come. Opus codec for the win :-)

    • @13cbt13 · 2 years ago · +22

      I didn't believe you and I did some research. Listened to a conversation at 16kbps and it was crystal clear. Opus codec for the win indeed.

    • @JoshWalker1 · 2 years ago · +2

      indeed

    • @BinaryCounter · 2 years ago · +12

      Opus is amazing. You can get very intelligible speech down to 6 kbit/s. I used this to listen to podcasts over 2G internet. I had a server that I would connect to through SSH and then instruct to download videos or podcasts from various sites, convert them to 16 kbit/s Opus and put them on a webserver. I would then use VLC on my phone to stream those over a 32 kbit/s 2G connection and listen to my favorite creators that way. Desperate times hehe.

    • @kvatikoss1730 · 2 years ago · +2

      @@BinaryCounter I've been thinking of making a custom Spotify like this

    • @unfa00 · 2 years ago · +3

      Opus is absolutely amazing. It's hard to imagine there will ever be anything better invented for lossy digital audio compression. It can also use extremely small buffers for minimal encode/decode latency and change all of its properties smoothly mid-stream. It's perfect for voice chat, especially in multiplayer games where you don't want to use too much bandwidth, or it could delay the most important game update packets.

  • @dreamzens · 2 years ago · +19

    I've been learning about this in class this quarter, it's such a cool concept. Currently, our class is working on a project to make UDP reliable with congestion control too. So awesome to see a Computerphile video on it as well! Thank you Dr. Clegg!

    • @richardclegg8027 · 2 years ago · +3

      You might want to look up QUIC if you did not already.

    • @RussellTeapot · 2 years ago · +2

      Wait, what? Don't take this as a harsh comment, I'm genuinely interested and don't know much about communication protocols, so bear with me: isn't this kind of "reinventing the wheel"? As far as I know, UDP by design doesn't care about reliability and congestion, just "multiplexing", in the sense that with the concept of ports multiple applications can communicate over the same connection. By adding the other features (typical of TCP), don't you make it heavier and defeat its purpose?

    • @richardclegg8027 · 2 years ago · +5

      @@RussellTeapot so Google's QUIC basically uses UDP and implements reliability features at the application layer. They can use some tricks to then get improved performance versus TCP. When I type this comment it is most likely being sent over that style of connection. A high proportion of traffic to/from Google owned services does this.

    • @dreamzens · 2 years ago · +10

      @@RussellTeapot It's just a class-assigned project for learning purposes to understand the underlying mechanisms of TCP, so the intention is to "reinvent the wheel", so to speak. You are right, though. It's for the sake of pedagogy, I suppose.

    • @dreamzens · 2 years ago · +6

      @@richardclegg8027 We briefly discussed QUIC with HTTP/2.0 in class and it seemed quite interesting. I will do more research on it, thank you!

  • @artiem5262 · 2 years ago · +4

    Thank you for discussing this classic and important paper -- we need periodic reminders on these issues, as the problems keep re-appearing when the long-ago solutions are forgotten. Another I'd like to see (from the dark ages) is Denning's Working Set Model for Program Behaviour from the late 60's, when computers were still huge beasts that remembered using little donuts made of rust.

  • @Lumaraf1 · 2 years ago · +33

    Thanks for the nice explanation. Could you also do a video on quic and how that handles congestion differently?

    • @richardclegg8027 · 2 years ago · +28

      QUIC is super interesting for sure. I thought this background was necessary first though, otherwise I'd need to explain both TCP and QUIC. :)

    • @SakarPudasaini10 · 2 years ago · +2

      @@richardclegg8027 Please do make a video on QUIC: how acknowledgements, packet loss, flow control and congestion control work with QUIC.

    • @SakarPudasaini10 · 2 years ago

      ++Global Synchronization

    • @aande1 · 2 years ago · +2

      @@richardclegg8027 Yes, please make a video on QUIC. I'd be very interested as well!

    • @mrxmry3264 · 2 years ago

      thing is, i hadn't even heard about quic until i started using syncthing, which explains why i don't know anything about it.

  • @bluegizmo1983 · 2 years ago · +12

    The early days of the internet were so fascinating! I was born in '83 and got my first computer around 1995; it had a Pentium CPU clocked at a whopping 133 MHz. I remember dialing into BBS boards and accessing the early internet via AOL, getting kicked off the internet because someone picked up the phone, and buying my first 1GB 5.25" hard drive thinking there's no way I'd ever be able to fill that much space 😂

  • @kentw.england2305 · 2 years ago · +12

    Van had to have his arm twisted to publish because journals didn't want articles that cited an email. Now it is one of the most cited articles in internet research.

    • @richardclegg8027 · 2 years ago · +3

      Yes. I only met him once but he seems a very modest guy.

  • @RonJohn63 · 2 years ago · +4

    I remember Van Jacobson header compression in PPP configurations.

  • @ingolfstraube8433 · 2 years ago · +9

    Wonderful explanation. I've wondered how, in the early days, they kept it working

  • @laurendoe168 · 2 years ago · +8

    What I find interesting is that in addition to a missing packet, sometimes there are errors in the received packet. Sometimes these errors can be corrected, and sometimes they cannot. When they cannot, the packet needs to be resent. This factor was not mentioned.

    • @autohmae · 2 years ago

      yes, a lot of the time in those days not whole packets were lost, but bits of packets

    • @richardclegg8027 · 2 years ago

      Yes --- there are a lot of mechanisms I didn't really have time to go through, including things like duplicate ACKs (which is the mechanism for what you mention, a packet that fails its checksum at the receiver). When I teach the topic thoroughly it's about 10 hours of lectures.

    • @laurendoe168 · 2 years ago

      @@richardclegg8027 Thank you for your reply. Eagerly awaiting the videos! (Although I'd love them, I do realize how much work it would be so I kid instead).

    • @richardclegg8027 · 2 years ago

      @@laurendoe168 Happy to get the feedback. There's a lot to say about TCP for sure.

  • @kesslerdupont6023 · 2 years ago · +1

    Thanks for the great video! I have seen my network bandwidth throttle in Task Manager in a similar way to the curve described and I always wondered why.

  • @johseh5312 · 2 years ago

    I like this guy, he's got a jazzy manner of communicating. There's pep to it.

  • @squelchedotter · 2 years ago · +10

    Would love to learn a bit about BBR, a proposed replacement for this algorithm!

    • @richardclegg8027 · 2 years ago · +6

      BBR is a TCP flavour - so it uses these mechanisms but in a clever way. At heart though it is still a window increasing and decreasing with congestion.
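The "window increasing and decreasing with congestion" described here is the classic slow start + AIMD (additive increase, multiplicative decrease) loop from the video. A toy sketch of that loop, illustrative only (function and parameter names are mine; real flavours such as CUBIC and BBR are far more sophisticated):

```python
def aimd(events, cwnd=1.0, ssthresh=16.0):
    """Toy congestion window: 'a' = a round trip's worth of ACKs, 'l' = loss."""
    trace = []
    for ev in events:
        if ev == "l":
            ssthresh = max(cwnd / 2.0, 1.0)   # multiplicative decrease on loss
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2.0, ssthresh)  # slow start: double per round trip
        else:
            cwnd += 1.0                       # congestion avoidance: +1 per round trip
        trace.append(cwnd)
    return trace

# Window climbs 1 -> 2 -> 4 -> 8 -> 16 quickly, probes linearly, then halves on loss.
print(aimd(["a"] * 5 + ["l"] + ["a"] * 2))
```

This produces the characteristic sawtooth from the video: steady ramps punctuated by sharp drops.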

  • @kvetter · 2 years ago · +2

    My understanding is that there is a problem with slow start vis-a-vis web browsing. Slow start works great for connections which are open for a long time--the delay caused by slow start isn't noticeable. But web browsing usually involves lots of short connections for which slow start is a significant problem.

    • @richardclegg8027 · 2 years ago · +1

      So there's lots of work on this, some approaches include TCP "Fast open" and "QUIC".

  • @thomasbonse · 2 years ago · +2

    I've actually had worse speed tests when my only ISP option was Comcast. I was generally only able to get 37 bps, with 75% packet loss on top of that. And that was in 2012-2013 in the DC metro area!

  • @BobFrTube · 2 years ago · +1

    Nice - I had assumed that backoff was in the original implementation. Have you done a video on the modern version of this problem -- buffer bloat?

    • @richardclegg8027 · 2 years ago

      Backoff is also part of the solution and backoff techniques are in the paper too. I don't think there is a computerphile on bufferbloat -- would be an interesting one.

  • @SeeonX · 2 years ago · +2

    Can you please do a video about how, when Covid started, ISPs in a lot of US and EU areas hit prime-time issues? Between 5PM and 11PM, download speeds would drop. This was a huge issue but no one really talked about it.

  • @kevintedder4202 · 2 years ago · +1

    Can you do a further video on the unintended effects of this congestion avoidance, such as TCP synchronisation, and how this can be avoided?

  • @mrxmry3264 · 2 years ago · +1

    8:07 that is how the old xmodem protocol worked, isn't it? yes, i did some BBS stuff back in those days, first using an acoustic coupler (300 bps, yawn), later using a series of increasingly fast modems...
    man, this brings back memories...

    • @schifoso · 2 years ago

      XMODEM was very basic and just sent a packet after every ACK. ZMODEM was probably the best as it allowed for streaming, file names and sizes, batch sends, packet size adjustments due to poor line quality, etc.
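XMODEM's one-packet-per-ACK scheme is textbook "stop and wait": throughput is capped by the round trip, not the line rate. A back-of-envelope sketch (function name and figures are illustrative):

```python
def stop_and_wait_seconds(n_blocks, block_bits, line_bps, rtt_s):
    """Each block must be fully transmitted and then ACKed
    before the next block can be sent."""
    return n_blocks * (block_bits / line_bps + rtt_s)

# 1000 XMODEM blocks (128 bytes each) over a 9600 bps line:
fast_ack = stop_and_wait_seconds(1000, 128 * 8, 9600, rtt_s=0.05)
slow_ack = stop_and_wait_seconds(1000, 128 * 8, 9600, rtt_s=0.50)
# Same line speed, but a ten-times-slower ACK path dominates the transfer time,
# which is exactly why streaming protocols like ZMODEM were such an improvement.
```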

    • @mrxmry3264 · 2 years ago

      @@schifoso
      i don't remember exactly how they all worked, but i do remember that ymodem was better (faster) than xmodem, and zmodem was better than ymodem. and then there was the hardware or software handshake between the modem and the computer...

    • @adfaklsdjf · 2 years ago · +1

      @@mrxmry3264 If only the alphabet provided letters faster than Z :(

    • @jwydubak9673 · 2 years ago

      It is not a problem for XMODEM because the link between the two machines is direct and it is impossible for there to be more than one packet in flight at any given time. Whereas in a network there may be roughly as many packets in flight as there are routers between the two machines.

    • @rfvtgbzhn · 1 year ago

      @@adfaklsdjf they could have used the first version AMODEM instead of XMODEM, then they would have had enough letters left.

  • @gabrieldesantanalacerda · 2 years ago · +13

    That was so good! I have a question: are these tricks implemented automatically by the network protocol today, or are they a set of practices that the ISP needs to configure manually?

    • @danielhartley13 · 2 years ago · +15

      TCP congestion control is implemented by the protocol itself. It's end-to-end driven, rather than network assisted. This is because the Internet Protocol doesn't pass congestion information to TCP, so TCP uses its own end-to-end method where it infers the network congestion state itself using backoff timers, or 'probing' as the guy in the video puts it.

    • @richardclegg8027 · 2 years ago · +4

      TCP is implemented at the end computers (sender and receiver). It is part of your operating system. Depending on your OS you can tweak exact details.

  • @MrKristian252 · 2 years ago

    I really love how he talks and explains

  • @kufena · 2 years ago · +4

    In my bit of Norfolk, sometimes, 40bps would seem good.

  • @adfaklsdjf · 2 years ago · +1

    So when I'm downloading a large file, the steady state is actually the sender is constantly ramping up then halving its send rate, "bouncing off the limiter" as it were? The rate looks so much more steady on my end..

    • @metaMichi · 2 years ago

      The queues (the buffers) get drained in the meantime (after the multiplicative decrease of the CWND). If the buffer is large enough (> 1 BDP) it can keep the delivery rate up.
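The BDP (bandwidth-delay product) referred to here is just link rate times round-trip time: the amount of data the path itself holds "in flight". A router buffer of at least one BDP can keep delivering while the sender's halved window ramps back up, which is why the download looks smooth. A quick sanity check (figures illustrative):

```python
def bdp_bytes(link_bps, rtt_seconds):
    """Bandwidth-delay product in bytes: the data a path can hold in flight."""
    return link_bps * rtt_seconds / 8

# A 100 Mbit/s path with a 40 ms round-trip time keeps ~500 kB in flight,
# so a buffer of >= 1 BDP can absorb the dip after a multiplicative decrease.
print(bdp_bytes(100e6, 0.040))
```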

    • @ciarfah · 2 years ago

      This reminds me of an 'issue' I had transferring files to a slow USB drive on a Linux machine. The file transfer would show 100% then hang, because the file would be fully buffered on the machine but not yet transferred to the flash drive over the slow bus.

  • @andljoy · 2 years ago · +1

    I have a question for Dr Richard Clegg. What would cause packet loss on a network that increases and decreases in an almost perfect sine wave, with peaks every 10 mins (around 7% loss)? I believe it's a buffer that is filling up on the main router causing it. The clients in turn reduce the TCP frame size to a point where the speed on the network drops and the router can then handle it. The frame size then slowly increases again and the problem repeats.

    • @richardclegg8027 · 2 years ago · +2

      Hmm... Honestly that periodicity is very slow. Most things operate on a quicker time frame than that. (Excuse me for asking, but could there be a millisecond/microsecond confusion? If the timescale was much quicker the behaviour would be more normal.) I guess you have grabbed a pcap and looked at it through Wireshark to determine this behaviour?

    • @framegrace1 · 2 years ago

      Industrial environment? Check for interferences....

  • @And_Rec · 2 years ago

    what if it’s an ack that gets lost? did i miss it?

  • @jwydubak9673 · 2 years ago

    Can we have a description of ECN (Explicit Congestion Notification) protocol in one of future videos?

  • @shez666 · 2 years ago

    I'm glad he did correctly point out that the TCP window is based on bytes, not packets as he kept saying, but disappointed it took 15 minutes

  • @calxier · 2 years ago · +3

    This seems to rely on cooperation of all endpoints on the network to implement the same multiplicative decrease. Are there additional built in protections against a "bad actor" who tries to keep sending packets as close to the bandwidth limit as possible?

    • @ГеоргиГеоргиев-с3г · 2 years ago

      I don't know how the protocol works, but my hunch says the problem would be easy to spot. If it is a direct connection, it would be obvious you are not using the protocol, and you could get cut off (if the middle node gets bothered with you, or it could just drop every however many of your packets to bring you down to that amount; after all, the internet uses a handshake, and you can't just opt out of an agreement without the other side noticing). Otherwise it would be out of your control, so you get what you get even if you request more. So it would only be a one-layer-deep problem, if I have to guess.

    • @TomStorey96 · 2 years ago · +4

      The routers in the network can also implement various "Quality of Service" schemes, some of them very complex, which can take into account an individual host's traffic patterns and limit their overall bandwidth.
      But the more complex things get the more memory and processor intensive it gets on the router, and there are limits to how much complexity or how many hosts you can manage.
      Out of the box I seem to recall that Cisco routers, at least at one stage, would have several queues per interface that packets would be placed into based on a hash, and then those queues were processed in a round robin fashion, such that one busy host does not deny other hosts some packet forwarding time. It's not perfect because one host can still affect others that have the same hash, but overall the effect is minimised for the greater majority.
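The hash-into-queues, round-robin scheme described above can be sketched in miniature (class and method names are mine; real router queueing is considerably more involved):

```python
from collections import deque

class RoundRobinQueues:
    """Toy per-interface fair queueing: hash each flow onto a queue,
    then serve the queues one packet at a time in round-robin order."""

    def __init__(self, n_queues=4):
        self.queues = [deque() for _ in range(n_queues)]
        self.next_q = 0

    def enqueue(self, src_ip, packet):
        # A busy host only ever fills its own (hashed) queue.
        self.queues[hash(src_ip) % len(self.queues)].append(packet)

    def dequeue(self):
        # Skip empty queues; forward at most one packet per turn.
        for _ in range(len(self.queues)):
            q = self.queues[self.next_q]
            self.next_q = (self.next_q + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None
```

As the comment notes, two hosts that hash to the same queue still contend with each other, but one flooding host can no longer starve every other flow on the interface.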

    • @richardclegg8027 · 2 years ago · +1

      UDP can do exactly this. If you wished to configure your machine to just push out as many packets as you can, nothing stops you.

    • @kentw.england2305 · 2 years ago

      This requires the routers to get involved. Read up on random early drop.

    • @richardclegg8027 · 2 years ago

      @@kentw.england2305 sure RED BLUE etc are designed to send loss based signals to well behaved TCP senders to make them back off. But a poorly behaved sender just keeps sending ignoring the drops.

  • @9_-_-_-_-_swo · 2 years ago · +2

    i think brady has osmosis'd a computer science degree at this point, his suggestions are almost always on point

  • @DrRasputin2012 · 2 years ago · +2

    I guess that this only works if every TCP implementation honours it - what if one implementation chooses to be greedy? How do you protect against that?

    • @kentw.england2305 · 2 years ago · +1

      The routers get involved. It's called queue management.

    • @richardclegg8027 · 2 years ago · +7

      Really there is no protection. This is called the "TCP fairness" problem. You could in theory tweak your own settings to be more greedy (back off slowly, push others out).

  • @andrewharrison8436 · 2 years ago · +2

    Yesterday my internet just worked - now whatever I do I will know that the packet rate is modulated by congestion control.
    Well, I think that's worth knowing.

  • @MalongaModeste · 1 year ago

    Can you do a video on how TCP/IP works, please, Dr?

  • @LimitedWard · 2 years ago

    I'm wondering, why couldn't the intermediate nodes on the network look at the window header, compare against the current state of its own receiver buffer, and then modify the window to be the lesser of the two values? This assumes the bandwidth is limited by the overall bandwidth of all the receiver buffers in the network. That seems like a more deterministic approach than this kind of guess and check.
    Edit: I realize why this wouldn't work. The traffic speed would then be limited to the max speed of the slowest intermediate node.

  • @aazimmermann · 2 years ago

    Thanks for the video! Just wondering if collision avoidance is still in use today and if congestion is still a common problem which results in packet loss?

    • @richardclegg8027 · 2 years ago

      Sure -- Carrier Sense Multiple Access/Collision Avoidance is basic to how your WiFi works. Try: "WiFi's Hidden ____ Problem - Computerphile" on youtube. Steve Bagley has a great explanation. Congestion can still occur though. Modern WiFi recovers loss on the local link so the loss is typically not there. However, modern WiFi is high bandwidth so the congestion is elsewhere on the network. You can get (say) 500Mb/s through your WiFi but somewhere on path will be a slower link (say at a congested router mid network). That is where the congestion happens and packets are lost.

  • @LeeSmith-cf1vo · 2 years ago

    Is this window size calculated per remote host:port? If not, wouldn't firewalls that drop packets play havoc with the algorithm?

    •  2 years ago +3

      It is done for each TCP connection individually. A TCP connection is defined by its source and destination IP address and port numbers. The algorithms used will account for most kinds of bottlenecks in the network, firewalls being just one of them.
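Per-connection state keyed on the (source IP, source port, destination IP, destination port) four-tuple might look like this in miniature (a hypothetical sketch, not any real OS's TCP stack):

```python
conn_state = {}

def state_for(src_ip, src_port, dst_ip, dst_port):
    """Each TCP connection gets its own congestion window and threshold,
    looked up by its four-tuple."""
    key = (src_ip, src_port, dst_ip, dst_port)
    return conn_state.setdefault(key, {"cwnd": 1, "ssthresh": 64})

a = state_for("192.0.2.1", 5000, "198.51.100.7", 443)
a["cwnd"] = 32   # this flow has ramped up
b = state_for("192.0.2.1", 5001, "198.51.100.7", 443)
# A different source port is a different connection: it starts from scratch,
# which is why one throttled flow doesn't slow down its neighbours.
```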

    • @richardclegg8027 · 2 years ago · +1

      Each TCP sender tracks its own, as Orjan says. If a firewall drops your packets you slow down. (But it would be unusual for a firewall to drop middle packets of a flow - usually they block a flow or they do not; it does not usually make sense for a firewall to kill only some of a connection's packets).

  • @carvoloco4229 · 2 years ago

    That graph with steady ups and sharp downs reminds me of the bitrate graph I get when I copy a large number of files from one location to another, especially if the destination lies on an external drive. Is it possible that the OS is using somewhat similar congestion control algorithms to allow the use of shared IO resources by many processes in parallel at the maximum possible rate?

    • @richardclegg8027 · 2 years ago · +1

      It will be using the TCP algorithm almost certainly.

    • @carvoloco4229 · 2 years ago

      @@richardclegg8027 Well, I doubt so. But I've seen nothing in the congestion control algorithm described that requires it to be applied exclusively to TCP connections; it seems to me that it could be applied whenever an uncoordinated set of information providers all try to push their data through the same shared channels, expecting some sort of acknowledgements in return. There are plenty of shared information channels within a computer, both physical (e.g. the bus) and logical (e.g. a memory buffer held by a disk driver), and multiple processes running simultaneously, many of which may try to make use of the same channels at the same time, knowing nothing about each other. Data fragments cannot be stuck in a router inside a PC, but they can certainly be stuck in queues, so algorithms that allow the OS to maximize the throughput are certainly desirable. Now, the graph with steady ups and sharp downs shown in the video does remind me of the throughput graph my computer displays when operating with large numbers of files, which makes me wonder if the lessons learned in the eighties were useful not only for the network congestion problem but also for other congestion problems in different situations.

    • @richardclegg8027 · 2 years ago · +2

      I am describing TCP in the video which covers about 85% of internet traffic. Until recently it was almost the only choice for reliable data transfer. A few years back Google found a way to run TCP like algorithms over QUIC doing the congestion control in the browser. That is mainly applied between some web browsers and Google owned web sites.

    • @richardclegg8027 · 2 years ago · +2

      What I think you are talking about here is internal buses in PCs. They don't really need this same kind of protocol as they (a) work on a known fixed bandwidth and (b) can be coordinated in other ways.

  • @syntaxerorr · 2 years ago · +1

    Great video. Thanks.

  • @phutureproof · 2 years ago

    Im so glad you cut away to those shots of yourself drinking tea and nodding /s

  • @jaybrooks1098 · 2 years ago

    That IIS bug around 2004 degraded the internet a lot too

  • @kipp14 · 2 years ago

    I now wonder if this might be the fatal problem with the current philosophy that US ISPs have with their cost-benefit analysis, where the packet drops are more to do with bad route choice on a given bundle for a certain bandwidth and less to do with the total amount of bandwidth available. I feel like at some point the faster you confirm receipt, the less congestion you have and the more reliable the connections become

  • @elraviv · 2 years ago

    It's a very good video. Do a follow up with "silly window syndrome" that will be fun.

  • @I.____.....__...__ · 2 years ago

    1:04 Nobody would _want_ to make a phone-call through the Internet in 1995, they'd just use a normal phone. The Internet was still for transferring files, which were still relatively small at the time (though I distinctly remember staying up all night a few times, trying to download files 😕).

  • @network_king · 2 years ago

    Interesting. You should do one on CSMA/CD and CSMA/CA. I would also love to see one on Radia Perlman and spanning tree.

  • @kelvinluk9121 · 2 years ago

    So does it mean that if I wanna enjoy the bandwidth I was promised by my ISP, I should go shut down my neighbors' network for a larger congestion window, right? :)

    • @richardclegg8027 · 2 years ago · +1

      Well it could work (if they are on the same ISP). Your neighbours might not enjoy it.

    • @adfaklsdjf · 2 years ago

      Technically yes. There are problems with this approach but they aren't technical.

  • @bs_blackscout · 2 years ago

    please more networking videos!!

  • @perrylund3995 · 2 years ago · +4

    Will be using in my college Data Communications class as supplement.

  • @lucidmoses · 2 years ago · +2

    Doesn't the "Cloud" come from the military pre-electronics. When a battle started, amongst all the smoke from poor gun powder a message was sent via carrier pigeon.

    • @adfaklsdjf · 2 years ago · +1

      Not sure, but I'm confident that it pre-dates wide use of the term "cloud computing".. I've been drawing the internet as a cloud on diagrams for more than 20 years..

    • @melanierhianna · 2 years ago

      Is that why there is an RFC for IP via carrier pigeon, so it can properly use the cloud.

    • @lucidmoses · 2 years ago

      @@melanierhianna Yes, my brothers internet feels like that out on the farm.

  • @martingriffiths9851 · 2 years ago

    Love these vids and this one in particular. Try running it at 1.25 times speed for optimal effect !!! ;)

  • @matthewparker9276 · 2 years ago

    I've seen my own internet drop as low as 40 bps, for hours at a time.
    But that's Australian broadband for you.

  • @zxuiji · 2 years ago

    Why not just start with all packets and move straight into the halving afterwards, assuming there were missing acknowledgements? As for the data itself, rather than (if I remember rightly) resending everything in the event of a failure, just send those that were missed. It's easy to set up, as you can do something like this with a Linux-based server:
    Step 1: launch a dedicated process for the connection
    Step 2: process creates folder ~/connections/<POOL_ID>/
    Step 3: process creates all packets expected to be sent in said folder, naming them .packet (since process memory might not be enough)
    Step 4: process sets up an internal buffer of values to hold the acknowledgement state of each packet
    Step 5: process creates a pool of threads for as many packets as can be handled
    Step 6: each thread sends its own packet & waits on acknowledgement, setting its acknowledgement value to 1 if it does get acknowledged, leaving it if it doesn't
    Step 7: main thread determines if any packets need to be resent or still need to be sent and re-uses the pooled threads for those packets while doing the mentioned congestion control (in other words, loop back to step 6 until all have been acknowledged or some abandon condition is triggered)
    Step 8: empty the folder of every *.packet file and exit
    Step 9: server re-uses the POOL_ID for the next connection it is handling
    With the above it is not necessary to re-generate the packets, just send what is not acknowledged over and over until the acknowledgement is retrieved, or abandon it midway for whatever reason; it's the client's job to put them in the right order using the sequence numbers after all
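The core idea in the steps above (resend only the packets that were never acknowledged) can be shown without the files and threads. A toy illustration, not how TCP's real selective acknowledgement (SACK) machinery works:

```python
def unacked(packets, acked_seqs):
    """Return only the packets whose sequence numbers were never ACKed;
    the receiver reorders by sequence number, so resend order doesn't matter."""
    return [p for p in packets if p["seq"] not in acked_seqs]

window = [{"seq": i, "data": b"chunk"} for i in range(5)]
retransmit = unacked(window, acked_seqs={0, 2, 3})  # only seq 1 and 4 go again
```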

    • @richardclegg8027 · 2 years ago · +2

      If you start by sending fast each new connection causes problems. Also worth noting writing to files can be pretty slow so you want to avoid any file writes here.

    • @zxuiji · 2 years ago

      @@richardclegg8027 Well, the files could be avoided with a memory check via attempted realloc. But as for "each new connection causes problems", it's gonna do that anyway when incrementing, so why not just get it over with from the start and see whether those problems actually occur? If they don't, then great; if they do, then if we're lucky at least some got through, and that will reduce the number of packets we next need to send, as it's the servers in between that will lose the packets after all

    • @richardclegg8027 · 2 years ago · +1

      @@zxuiji there is a huge amount of research into the ideal starting window size. The problems caused by slightly going over bandwidth with a modest increase in windowsize are much less than the problems caused by going hugely over bandwidth by starting too aggressively. Basically did you dump 100 packets the network could not handle or just one. (You'll also cause knock on problems for other traffic on your own network.) If you want to know current state of the art TCP CUBIC or TCP BBR are where to look. It also depends on setting (data centre vs general internet).

    • @zxuiji · 2 years ago

      @@richardclegg8027 You're halving at each failure anyway, so why not just try it all at the start and see if you need to halve the amount you send?

    • @richardclegg8027 · 2 years ago · +2

      @@zxuiji that was part of the original problem solved by this paper. Lots of new connections happen all the time on a well functioning network. If they all start high the result is congestion collapse. But don't take my word for it. Read the paper.

  • @cadekachelmeier7251 · 2 years ago · +7

    Ack

  • @rolandtennapel5058 · 2 years ago

    The internet is represented as a cloud because it is nebulous; You know somewhat what's going on in there, but it's such a jumble it's next to impossible to map it. 😉 You can pull a cable out of a server, but the 'traffic controllers' won't know immediately that that server is no longer available, and that principle applies to those controllers between themselves as well, of course. So in a very real sense it's like looking into a cloud or mist, but the several waves to indicate a mist could be confused as 'somewhere between; 1~2', an unary programming symbol, a flow or a volume. Different eyes look differently at such symbols so a cloud makes the most sense.

  • @asandax6
    @asandax6 2 years ago

    I came to this video because I've been researching ways to bring down data usage. In my country, South Africa, data is really expensive and the infrastructure is really poor.

  • @olivier2553
    @olivier2553 2 years ago +1

    Bandwidth of the classic phone was about 3 kHz, that is, it needed 3 kbps. 32 k would allow 10 phone calls simultaneously. That is why ADSL worked: the phone only needed a very small portion of the bandwidth that could be carried on a copper twisted pair.

    • @kesslerrb
      @kesslerrb 2 years ago

      What codec gets a phone call down to 3Kbps? G.729 compresses calls down to ~6Kbps but I haven’t seen any that can reliably support calls with less bandwidth

    • @olivier2553
      @olivier2553 2 years ago

      @@kesslerrb No codec, just normal analog phone, like it has been used for 100 years :) Phone has been designed to use only 3KHz bandwidth because it covers most of the spectrum of human voice.

    • @Phroggster
      @Phroggster 2 years ago

      @@kesslerrb The voiceband of traditional plain old telephone service (POTS) was roughly 300-3,300 Hz bidirectional. The signals were confined to this range via analog filtering, no codecs required. Just a very restrictive bandpass filter at both ends, and several more along the connection path.

    • @WobblycogsUk
      @WobblycogsUk 2 years ago +4

      So much confusion here. The bandwidth of a standard voice-only phone line is about 3 kHz, and it operates at the regular voice frequencies. The data rate possible on that line is given by the Shannon-Hartley theorem, which depends on the signal-to-noise ratio of the line. It was initially assumed that phone lines would be able to achieve data rates of about 33 kbps, but development of the phone network resulted in better signal-to-noise ratios, which allowed for higher speeds such as 56 kbps. In theory they could probably have gone a little faster, but the world switched to ADSL. ADSL uses basically the same ideas, but it operates at higher frequencies which, in turn, give it more bandwidth to play with, and so it achieves better data rates.
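For what it's worth, the Shannon-Hartley limit mentioned here is easy to evaluate; the 3 kHz bandwidth and 30 dB SNR below are illustrative figures for a voiceband line, not measured values:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley theorem: C = B * log2(1 + S/N)."""
    snr = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr)

# A ~3 kHz voice channel at ~30 dB SNR caps out near 30 kbps,
# in the same ballpark as the early modem speeds quoted above.
capacity = shannon_capacity_bps(3000, 30)
```

A cleaner line (higher SNR) raises the limit, which is how later modems edged closer to 56 kbps.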

    • @olivier2553
      @olivier2553 2 years ago

      @@WobblycogsUk Yet, you don't need a minimum of 32Kbps to transmit voice only.

  • @polares8187
    @polares8187 2 years ago

    faster details pleease

  • @uplink-on-yt
    @uplink-on-yt 2 years ago

    Good thing all the implementations out there adhere to this, right? This pretty much relies on the cooperation of the network users. Luckily, I guess, if there’s only a small number of jerks on the network, things will continue to work.

  • @dp121273
    @dp121273 2 years ago

    Don't look at my left ear the whole time 🤣

  • @AcornElectron
    @AcornElectron 2 years ago

    My internet feels like it collapses daily

  • @andredejager3637
    @andredejager3637 2 years ago

    love it 😀

  • @wahyu9420
    @wahyu9420 2 years ago +2

    My internet speed right now isn't better than internet speed 40 years ago, suck.

  • @jca111
    @jca111 2 years ago +3

    I watched at 125%

    • @GilesBathgate
      @GilesBathgate 2 years ago +3

      He talks with an additive increase and multiplicative decrease in speed 😂

  • @three-card-dead
    @three-card-dead 2 years ago

    really not happy with the quality of this video -- any chance we can get another pass on this?

  • @colonelhacker3661
    @colonelhacker3661 2 years ago

    FECN and BECN.

  • @der.Schtefan
    @der.Schtefan 2 years ago +1

    First time ever I used the 1.5x speed button on UA-cam. He speaks so sloooooooooooooooooooooooooooooooow and seems to micro-nap between sentences losing orientation. His bandwidth certainly is below 40bps.

  • @DumbledoreMcCracken
    @DumbledoreMcCracken 2 years ago

    This is a low bandwidth presentation

  • @NickNorton
    @NickNorton 2 years ago +2

    It's faster than RFC 1149

    • @idontknowreally
      @idontknowreally 2 years ago +2

      Damn you, this is basically the rick roll of RFCs... 😁

    • @richardclegg8027
      @richardclegg8027 2 years ago +6

      Without looking I guess "avian carriers"?

    • @nikkiofthevalley
      @nikkiofthevalley 2 years ago +4

      @@richardclegg8027 Yep, avian carriers. I'm less of a networking guy, so I tend to use HTTP response code 418 - "I'm a teapot" for these sorts of jokes.

    • @kesslerrb
      @kesslerrb 2 years ago +4

      Wasn’t there an extension to RFC1149 that strapped usb thumb drives to the pigeons? 🤣

    • @adfaklsdjf
      @adfaklsdjf 2 years ago

      @@kesslerrb probably.. I heard someone looked into it and found you could achieve decent throughput with usb sticks, if you didn't care about latency.

  • @lenorkhide2873
    @lenorkhide2873 2 years ago +1

    You can see that graph in action on steam during the initial release of an AAA game

  • @imamalox
    @imamalox 2 years ago

    The person who edited this video could've maybe put in some effort to remove the annoying background noise 😅

  • @Yupppi
    @Yupppi 2 years ago

    At this point I'm starting to feel like most of the data transferred on the internet is all kinds of protocols, keys, encryption and notes about the packet rather than the actual data.
    Bernoulli's law would imply that the packets would flow faster in the bottleneck!

    • @DFPercush
      @DFPercush 2 years ago

      You should write an angry letter to Al Gore. :P
      Although if you think about the "bottleneck" being the fiber optic cables that carry hundreds of gigabits of bandwidth, it kind of does.

    • @adfaklsdjf
      @adfaklsdjf 2 years ago

      You're not wrong about protocol overhead. It varies as a percentage of total traffic, with higher-data applications generally having a much lower protocol overhead as a % of total traffic... for watching a youtube video it'll be a few percent; for a short email, it's basically all overhead.
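To put a rough number on that (a back-of-the-envelope sketch using minimum TCP/IPv4/Ethernet header sizes, ignoring options, TLS, acks and retransmissions):

```python
HEADER_BYTES = 14 + 20 + 20   # Ethernet + IPv4 + TCP headers, minimum sizes

def overhead_pct(payload_bytes):
    """Per-packet header overhead as a percentage of the whole frame."""
    return 100 * HEADER_BYTES / (HEADER_BYTES + payload_bytes)

print(round(overhead_pct(1460), 1))  # full-size segment (bulk video): 3.6
print(round(overhead_pct(20), 1))    # tiny payload (short message): 73.0
```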

  • @hrford
    @hrford 2 years ago

    I'd tell you a TCP joke, I'm sure you'll get it. Are you ready to hear it?

    • @kevintedder4202
      @kevintedder4202 2 years ago

      I don't think anyone got it. Try sending it again, slowly. 😀

  • @Ur11
    @Ur11 2 years ago

    "fork handles"🕯🕯🕯🕯?

  • @AcornElectron
    @AcornElectron 2 years ago

    Yeah?

  • @romandobra3151
    @romandobra3151 2 years ago

    What?

  • @sjatkins
    @sjatkins 2 years ago

    40 bps? Wow. I got way better than that on FidoNet years earlier over phone modems.

  • @barebears289
    @barebears289 2 years ago

    Who's here from Hussein Nasser's channel?

  • @sandordugalin8951
    @sandordugalin8951 2 years ago

    Brilliant

  • @alexguillen2493
    @alexguillen2493 2 years ago

    Syn Ack!!!!!

  • @leviath0n
    @leviath0n 2 years ago

    Best at 1.25 speed.

  • @akashpawar9058
    @akashpawar9058 2 years ago

    akash

  • @vicentelhs
    @vicentelhs 2 years ago +1

    those laptops look SUS

  • @skytech2501
    @skytech2501 2 years ago +4

    wonderful vid, if you want to use your full brain bandwidth play the video at 1.75 :)

    • @AmanSharma-fr4uu
      @AmanSharma-fr4uu 2 years ago +3

      Increase it exponentially and decrease when there's overflow

    • @DarthAthar
      @DarthAthar 2 years ago

      why not directly 2?

    • @skytech2501
      @skytech2501 2 years ago +4

      @@DarthAthar to avoid congestion

    • @allesarfint
      @allesarfint 2 years ago +2

      Had to reduce it to 0.5, the buffer has not much space

  • @Schwuuuuup
    @Schwuuuuup 2 years ago

    Not an uninteresting video, but for my taste way too long-winded... basically 16 min of preamble in a 20-minute video.

  • @skorp5677
    @skorp5677 2 years ago +2

    1024th

  • @NoEgg4u
    @NoEgg4u 2 years ago +2

    CNN is reporting that Al Gore wrote the paper on TCP/IP and congestion.

  • @domtorque
    @domtorque 2 years ago

    25th

  • @tubeWyrme
    @tubeWyrme 2 years ago

    This could have been better explained in 5 mins with no protocol details

    • @adfaklsdjf
      @adfaklsdjf 2 years ago +1

      I think you're looking for another channel

  • @cubeboi7070
    @cubeboi7070 2 years ago

    sixty ninth

  • @sam-you-is
    @sam-you-is 2 years ago

    sixth

  • @jmshrv
    @jmshrv 2 years ago

    Second

  • @MichaelKingsfordGray
    @MichaelKingsfordGray 2 years ago

    A functionally illiterate Doctor working at a University!
    How standards have avalanched.
    Sad.

  • @anon3308
    @anon3308 2 years ago

    Fifth

    • @eekee6034
      @eekee6034 2 years ago

      @Yummy Spaghetti Noodles That's a result of using more than one server to handle comments. Tom Scott has a clear and concise video on why the number of likes goes up and down (something like "Why YouTube lies about likes"), and if you think about it, you can see how it applies to comments too.

  • @cybercurule8082
    @cybercurule8082 2 years ago

    First