I Finally Understand Load Balancing

  • Published 13 Apr 2024
  • Load balancing is key to keeping your services up. It's also not as simple as you may think. Sam did an INSANE job visualizing load balancing and you should definitely check the post out to play with it yourself
    samwho.dev/load-balancing/
    FOLLOW SAM / samwhoo
    Check out my Twitch, Twitter, Discord more at t3.gg
    S/O Ph4se0n3 for the awesome edit 🙏
  • Science & Technology

COMMENTS • 127

  • @t3dotgg
    @t3dotgg 1 month ago +24

    Sam's the coolest and deserves a follow. If he gets popular enough he can dedicate more time to making dope blog posts like this
    twitter.com/samwhoo

    • @mpty2022
      @mpty2022 1 month ago

      Excellent work of animation, design, and understanding. I just want to add a small point: this is essentially scheduling (a decades-old topic), and load balancing is one application of it. Scheduling itself has been well researched in the CS community, and those researchers deserve much of the praise here.

    • @samrosewho
      @samrosewho 1 month ago +1

      @@mpty2022 for sure! I want to write more broadly about the topic of scheduling soon; there's a tonne of literature to absorb first though 😅

    • @mpty2022
      @mpty2022 1 month ago

      @@samrosewho oh man, you did great work, I feel like an a** now

    • @samrosewho
      @samrosewho 1 month ago +1

      @@mpty2022 it’s alright, we’re all standing on the shoulders of giants. Important to remember that 😄

  • @samrosewho
    @samrosewho 1 month ago +305

    Oh hey I wondered why I got a bunch of new followers on Twitter 😅

    • @lyreshechter1812
      @lyreshechter1812 1 month ago +6

      Great article! Thanks!

    • @jasonaables
      @jasonaables 1 month ago +2

      Really well done. I'm a visual learner so this kind of style is very helpful.

    • @EvertvanBrussel
      @EvertvanBrussel 1 month ago +2

      Hey, I have a question about your post. You mentioned that in terms of dropped requests, PEWMA starts out better than LC but eventually performs worse. I don't really understand why that would be the case. That is, I understand why it happens given how you explained PEWMA works, but the fix seems (to me at least) trivially simple.
      You start with PEWMA, but the load balancer also knows which servers have their queues maxed out, so once all your fast servers' queues are full, you start using the slower servers again, effectively falling back to the LC algorithm. Then, once traffic slows down and your servers have some breathing room again, you switch back to the pure PEWMA algorithm.
      Isn't it that easy? Am I missing something here?
      Edit: oh, and finally, I absolutely loved all the animations, especially the playground at the end where you can tinker with the parameters. If it's not too much to ask, could you add to the playground an automatically updating graph showing the 95th percentile of latency and/or the number of dropped requests over the last 60 seconds? Once you push the algorithms close enough to their limits, it's actually quite hard to get an accurate feel for their performance simply by eyeballing it.

    • @NicholasMaietta
      @NicholasMaietta 1 month ago +1

      I've been building servers and full-stack apps and have always hated the issues I ran into using load balancers. I appreciate the very good breakdown of this. It is now part of my reference material to share with others.

    • @samrosewho
      @samrosewho 1 month ago +3

      @@NicholasMaietta really glad you enjoyed it! ❤️
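The PEWMA-with-fallback scheme EvertvanBrussel proposes above can be sketched in a few lines. This is a hypothetical hybrid, not something from the post; the field names and queue model are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Server:
    pewma_latency: float  # smoothed latency estimate for this server
    queued: int           # requests currently waiting in its queue
    max_queue: int        # queue capacity before requests get dropped

def choose(servers):
    """Prefer the PEWMA-best server that still has queue room; once
    every queue is full, fall back to the shortest queue (i.e. the
    least-connections behaviour the commenter describes)."""
    with_room = [s for s in servers if s.queued < s.max_queue]
    if with_room:
        return min(with_room, key=lambda s: s.pewma_latency)
    return min(servers, key=lambda s: s.queued)
```

For example, a fast server with a full queue is skipped in favour of a slower one with room, which matches the proposed fallback.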

  • @welcometovalhalla2884
    @welcometovalhalla2884 1 month ago +75

    Just hire the Factorio players smh

  • @4.0.4
    @4.0.4 1 month ago +38

    These algorithms were initially invented by the late John McAfee to handle his harem of side girls. This is why they're called "load requests".

  • @fdsafdsafdsafdsafd
    @fdsafdsafdsafdsafd 1 month ago +82

    Nice to see devs appreciating ops instead of just assuming "it works".

    • @samrosewho
      @samrosewho 1 month ago +15

      Ops/infra has some of the coolest computer science, but also lots of coverage of it is quite dry and intimidating. I’m having a blast trying to bring the ideas to life and make them less scary!

    • @buddy.abc123
      @buddy.abc123 1 month ago +1

      All devs appreciate ops, except JavaScript devs.

    • @AMalevolentCreation
      @AMalevolentCreation 1 month ago

      @@buddy.abc123 very accurate

    • @Aoredon
      @Aoredon 1 month ago

      Every dev appreciates ops. You must not be a dev.

  • @jst1977
    @jst1977 1 month ago +47

    Fun fact: an exponentially weighted moving average is exactly the same thing as an RC low-pass filter in audio.
    Also, the algorithm doesn't have to keep track of the last N values, since the math allows it to use only the previous moving average and the current value. That way the algorithm is O(1) and only needs a few CPU cycles (I think fewer than 20 per server for the computation itself).
    edit: specify which low pass

    • @samrosewho
      @samrosewho 1 month ago +4

      Oh damn… I didn’t realise this! Makes total sense though, and now I feel silly 😅

    • @roycrippen9617
      @roycrippen9617 1 month ago

      Is that because, as you move into higher and higher frequencies, the number of samples representing the signal decreases? If I remember right, the simplest FIR low-pass is just the average of samples n and n-1. So as the number of samples per cycle decreases, the difference between the amplitudes of adjacent samples becomes much more significant. I never thought about it as an exponentially weighted moving average though, probably because I'm a dumb novice programmer and huge audio nerd lol

    • @jst1977
      @jst1977 1 month ago +6

      @@roycrippen9617
      One more fun fact about EMA: it is also used in LLM training.
      I don't think you're dumb, there is just a crap ton of knowledge in this field.
      EMA (exponential moving average) is an IIR filter that can be derived by discretizing resistor capacitor (RC) analog low pass filter.
      The analogy that clicked for me was that the original filter pulls the moving average towards itself with a rubber band. Low frequencies have enough time to change the position of the average. While high frequencies kind of just wiggle it a little because they change too fast to attract the average. This analogy is also reasonably mathematically accurate.
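The O(1) update jst1977 describes is the textbook EWMA/IIR recurrence: keep only the previous average and fold each new sample in. A minimal sketch (the smoothing factor 0.2 and the latency values are arbitrary illustrative choices):

```python
def ewma(avg, sample, alpha=0.2):
    # One step of an exponentially weighted moving average:
    #   new_avg = alpha * sample + (1 - alpha) * old_avg
    # No history of the last N samples is needed, so each update
    # is O(1) in both time and memory.
    return alpha * sample + (1 - alpha) * avg

avg = 100.0  # seed with the first observed latency, in ms
for latency in [100, 100, 300, 100]:
    avg = ewma(avg, latency)
```

The single 300 ms spike nudges the average up, but older samples decay exponentially, which is exactly the rubber-band behaviour described above.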

  • @BosonCollider
    @BosonCollider 1 month ago +8

    One benefit of rewriting your server in Rust or optimized Go is that you usually don't need a load balancer until the point where you need to distribute your database. Unless the load on your server is inherent to what it is doing, I really dislike the idea of having multiple servers that all access a shared database, especially if this means that requests from the same client can get reordered.
    Given how good server hardware has gotten lately, diagonal scaling + region sharding is just a much better path than horizontal scaling imho. With that said, PEWMA's ability to favour single servers is neat if you need to autoscale the number of pods.

  • @xjtxjgsmnkccswlcdelqmms
    @xjtxjgsmnkccswlcdelqmms 1 month ago +7

    backend guys: how the hell do I choose what to use?
    frontend guys: how the hell did he make these animations?

  • @khepin
    @khepin 1 month ago +7

    If you're interested in load balancing, there's a fantastic talk by the CTO of Fastly on the topic. He shows that "random" is already a great algorithm that's hard to beat and shows some methods that perform better. I think "get 2 at random then pick the fastest" is one of the best algos they use. Talk is a bit old, so things may have evolved since too.

    • @doc8527
      @doc8527 1 month ago +3

      Many technical terms like "load balancing" aren't "simple" topics once you start thinking about how they actually work.
      That's why I hate how the modern interview process, even for non-senior roles, involves system design questions (which implicitly encourage interviewees to drop these terms), where random junior/mid devs casually brag that they can horizontally scale a system to handle billions of requests, in designs for nonsense cases, without considering tradeoffs or penalties. Everyone pretends to know this stuff during the interview, but in fact most (interviewers included) don't without doing long research.
      In reality, it's often the person like me (not trying to brag that I'm a good dev, just an average, more pragmatic one) who can't pass those questions but has to implement this stuff on real servers. Those who bragged they knew it during the interview? Their theoretical assumptions, without real past experience, fail at the very first step.
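The "get 2 at random then pick the fastest" idea khepin mentions above is often called the power of two choices. A minimal sketch, with made-up server names and a made-up latency map:

```python
import random

def two_choices(servers, est_latency):
    """Sample two distinct servers uniformly at random and send the
    request to the one with the lower latency estimate. Comparing
    only a random pair avoids the herding you get from always
    picking the single global best."""
    a, b = random.sample(servers, 2)
    return a if est_latency[a] <= est_latency[b] else b

servers = ["s1", "s2", "s3"]
est_latency = {"s1": 12.0, "s2": 48.0, "s3": 30.0}
```

With only two servers in the pool, the pair is always the same and the faster one always wins; with more servers, the randomness spreads load while still favouring fast machines.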

  • @alexlowe2054
    @alexlowe2054 1 month ago +15

    One of the more complex problems I've dealt with when load balancing is that servers caching requests can dramatically change the speed of requests when load balancing. The load balancer can't possibly know which servers have things cached, which makes real world load balancing much more difficult. Often, your servers have enough available resources that round robin is plenty good enough.
    That, and when things go badly, the back end server might not have the data the user is requesting. Many developers design a webservice on their local machine, without thinking about how that same code will function in an environment where requests can go to multiple servers. I've seen some applications that simply don't work if a user gets their requests routed to a server that wasn't the same server that's handled previous requests. Obviously, that's not good. Most of the boring corporate environments I've worked in have set all session requests from a specific user to route to the same server, which I'd bet is a far worse load balancing strategy than any of these other options. In the worst case, those applications effectively only have a single server.
    API design is the first critical step of building a good load balanced service. If you don't have an architecture that handles requests going to whatever server is available, then you can't even start to load balance requests. Ideally, your cache layer would be a separate physical device, so your cache layer can benefit all your servers, not just one server that handled a specific request. Last time I looked into this, Stack Overflow runs their entire server infrastructure on a dozen servers, because 90% of their network requests are read-only, and can be cached at the content delivery layer, removing the majority of load from their service. That's the sort of easy win that will massively improve performance.
    I've seen problems with API design cause far more problems for the medium sized websites I've worked on. Which is one of the advantages of round robin balancing. With just two servers, you force your developers to deal with handling problems relating to caching, data concurrency, and request concurrency. If your nonprod environment has round robin load balancing, your developers are going to run into any "works on my machine" problems long before those problems appear in production. The other problem I've seen is that most businesses dramatically over-spec their servers, because they believe that dropping any requests is a "production incident". None of these load balancing techniques matter when your servers are constantly below 15% utilization. Modern multicore machines can handle an insane number of concurrent requests, which means you're really only having to think hard about load balancing at hyper-scale websites that serve millions of requests every second. Below that, things like caching and server architecture matter a lot more to performance.
    But probably the biggest problem I've seen is that businesses that are large enough to need proper load balancing are usually large enough that you can't just log into a server and change your load balancing options. That creates a huge opportunity cost to changing the defaults. People need to understand what the options are, and what the changes mean. I've worked in too many environments where pushing for a small improvement like this would require a ton of meetings, which usually means these types of settings changes never get made. And the smaller businesses are usually too busy working on mission critical things, so they'll just throw more infrastructure at the problem to avoid wasting time thinking about it.

    • @samrosewho
      @samrosewho 1 month ago +3

      Request variance really is a pain; I try to outline strategies that account for it in the post. Caching is pretty extreme, though, and if you can do it you definitely want to do it before making any load balancing decisions (I haven't often seen this done in practice, though).
      Session-sticky load balancing is fairly common but something you should really try to avoid if you can. 😅
      I'd love to be a fly on the wall of any meeting where people are arguing to change load balancing params and corporate executives are shown excerpts from my blog post 😁

    • @wassafshahzad8618
      @wassafshahzad8618 1 month ago

      I think you have a great article on your hands. If you can write one up on how you solved your problem, it would be a great read.

    • @wassafshahzad8618
      @wassafshahzad8618 1 month ago +1

      @@samrosewho Great article man, one of the best reads ever.

    • @InterFelix
      @InterFelix 1 month ago +2

      If getting budget for more servers is easier than getting a load balancing algorithm change approved, you know you've made it to corporate hell.

  • @BastianInukChristensen
    @BastianInukChristensen 1 month ago +3

    Sweet! I needed to know which load balancer to use for my side project with 0 users, thanks for sharing!!

  • @Ray-gs7dd
    @Ray-gs7dd 1 month ago +10

    17:35 more backend stuff would be awesome. Thanks for the video :)

  • @weatherwaxusefullhints2939
    @weatherwaxusefullhints2939 1 month ago +6

    Sometimes I ask myself what I'm looking for on YouTube. This is the answer.

  • @miscbits
    @miscbits 1 month ago +3

    This article could have played 5d chess making you refresh to fully appreciate it and also reload some ads

    • @samrosewho
      @samrosewho 1 month ago +1

      Where are you seeing ads on the post?

    • @miscbits
      @miscbits 1 month ago +1

      @@samrosewho I’m not, I just thought the concept of doing that would be funny. I’ll edit my comment cause I think it wasn’t clear that this was a joke

    • @samrosewho
      @samrosewho 1 month ago +3

      @@miscbits you had me worried 😅 thought they may have slipped in somehow. Thanks for the edit!

  • @db_2112
    @db_2112 1 month ago +2

    Really impressed he actually coded the examples!

  • @Pscribbled
    @Pscribbled 1 month ago +3

    There’s a reason why round robin is still the standard - it optimizes for availability first.
    Imagine if one of your servers fails for some reason and just starts to immediately drop requests, in this scenario, while the server still has not been health checked, both the dynamic weighted round robin and the least connections algorithm will send all of the requests to the downed server. This is called black holing.
    I’m not familiar with the PEWMA algorithm but it looks like it’d fall victim to the same issues.
    With respect to the standard weighted round robin, generally you try to have homogeneous fleets. This makes it more predictable how your hosts will behave and easier to extrapolate data from load tests and steady state performance.
    At a high enough RPS, the cost of your requests will generally become more or less homogeneous - you can often make an assumption of even distribution (unless you’re using L4 load balancing)
    Given that the two statements above are true, there's no real reason to have weighted round robin on your load balancer.

    • @samrosewho
      @samrosewho 1 month ago

      There are strategies to avoid the kind of behaviours you’re talking about, but none of them do as well as round robin at minimising loss. I wouldn’t go as far as saying that it optimises for availability, but it does a good job at avoiding pathological behaviour. It’s also wonderfully simple and good enough for 99% of use-cases 👌
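For reference, plain round robin, the "wonderfully simple" default discussed in this thread, is little more than a counter. A sketch of the idea, not any particular load balancer's implementation:

```python
import itertools

class RoundRobin:
    """Cycle through the servers in order, ignoring their state
    entirely. The simplicity is the point: no latency or health
    signal can mislead it into black-holing traffic onto one
    misbehaving server."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)
```

Every server receives exactly one request per pass, which is what makes the load distribution so predictable.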

  • @urisinger3412
    @urisinger3412 1 month ago +9

    i can handle any load

  • @jonasosterberg7517
    @jonasosterberg7517 1 month ago +2

    Round Robin is always my choice, because of predictability of load distribution. Fast doesn't always mean correct. Often a 404 or a 500 is faster than a 200.

    • @samrosewho
      @samrosewho 1 month ago +1

      Quite a few people pointed out to me that failures are faster than successes, and I should have mentioned that in the post. I did think about it at the time, but in practice these algorithms already account for it and I didn't think it helped me achieve the goal of the post 😅
      Round robin is a solid choice, and works fine up to much larger scale than most companies will ever achieve.

    • @jonny555333
      @jonny555333 1 month ago +2

      True, but that's kind of not what the article showed. It showed that there are algorithms that are both faster than round robin and drop fewer connections.

  • @NateThompsontheGreat
    @NateThompsontheGreat 1 month ago +1

    Sam killed it on that post, and I appreciate your commentary. The visuals alone bring amazing clarity to an often misunderstood staple technology. Kudos Sam!

    • @samrosewho
      @samrosewho 1 month ago

      Said thank you on Twitter but will say it here too: thank you 🙏

  • @BluntsNBeatz
    @BluntsNBeatz 1 month ago +1

    This is an easy to understand hypothetical. Now if only I knew how to actually, practically get started implementing load balancing in real scenarios.

    • @InterFelix
      @InterFelix 1 month ago

      The first step is getting your application ready for horizontal scaling.
      Does it tolerate reordered requests?
      Do you also need to scale your database?
      If so, how?
      How do you handle race conditions?
      There's probably a metric fuckton of additional important aspects I've omitted here (not a dev, I'm an ops guy).
      If your application is ready, you can think about implementing load balancing.
      There's great FOSS load balancing software available, most notably HAproxy and nginx.
      Both work great and have their own idiosyncrasies, but you're probably already familiar with nginx because it's also a great webserver, so you can stick with that one.
      This is where you can set the load balancing algorithm.
      Your load balancer needs to be able to handle all traffic coming to your site, but its task is not very computationally expensive, so it will be able to handle much more throughput at a given hardware spec than your webservers.
      So spec generously, but evaluate your production metrics so you're not wasting money on a completely overspecced load balancer.
      You'll also probably want to make your load balancer redundant, so add a second one with floating IP failover (through keepalived for example). This way your load balancer is redundant, but you're also wasting money on a server sitting idle all of the time.
      If you need to scale your load balancer horizontally, you can always add more pairs and take advantage of DNS round robin to split traffic between them.
      Alternatively, you can of course buy load balancing as a service through your hyperscaler of choice, or Cloudflare, or whatever.
      You can also buy load balancer appliances from vendors like Kemp, a lot of enterprise firewall appliances also have load balancer functionality built in. There's also ready-made software solutions available in case you don't want to build the nginx / HAproxy setup yourself and just want a nice setup process with a fancy GUI.
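To make the nginx route described above concrete, the algorithm choice really does come down to a line or two in an upstream block. A minimal illustrative fragment; the hostnames, port, and weight are placeholders, not a recommended setup:

```nginx
upstream app_servers {
    # Default is round robin; uncomment one line to switch algorithm.
    # least_conn;                     # least connections
    # hash $request_uri consistent;   # consistent hashing on the URI
    server app1.example.internal:8080 weight=2;  # weighted round robin
    server app2.example.internal:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```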

  • @smanqele
    @smanqele 1 month ago +1

    Conclusion: Stay away from figuring out LB. Adopt a solution and just pray!

  • @dimitriborgers9800
    @dimitriborgers9800 28 days ago

    Noob question: is the request queue something that comes out of the box with a server, or is it something like RabbitMQ?

  • @JobStoit
    @JobStoit 1 month ago +2

    That is a really really really good article. That's art! 👏👏

  • @Pekz00r
    @Pekz00r 1 month ago

    Great article and great video! Great job to both Theo and Sam.
    More backend topics would be great!

  • @ChristopherCricketWallace
    @ChristopherCricketWallace 1 month ago +1

    If only I could get to that level... Well done, Sam. BRAVO!!!!

  • @the-real-random-person
    @the-real-random-person 1 month ago

    Impressive video, I learnt a lot from it :) Thanks man, keep it up ❤

  • @kingnick6260
    @kingnick6260 1 month ago +1

    This article was written with plenty of love

    • @samrosewho
      @samrosewho 1 month ago

      I’m glad it shows. My in-progress post is probably the most love I’ve poured into any of them so far. Should be out some time in the next month.

  • @RemotHuman
    @RemotHuman 27 days ago

    this is more interesting than sorting algorithms

  • @asaurcefulofsecrets
    @asaurcefulofsecrets 7 days ago

    This assumes that all request events are independent. In real life they rarely are. For processors operating on cacheable data, sticky sessions help hit the cache more often. I am surprised that is not even mentioned; it's queueing theory 101. Like page 1, paragraph 1: "Let's assume independent events, exponentially spaced in time, also called Poisson distributed traffic, blah blah blah." OK, cool. But what if they are not?
    The balancer does not know the events are related, *but it knows the source*. If requests come from the same source and close enough in time, it can assume they are related and send them to the same processor, which will take less time to serve them overall because it only has to retrieve the required data once. This is not too complex a strategy. Coupled with a simple round robin, weighted or not, it usually gives better results than any of the strategies described here (I don't think I need to remind anyone that IO is almost always the latency killer).
    Of course, any real balancer also limits the number of concurrent connections per processor and tracks timeouts to determine healthiness, effectively achieving some of the effects of the weighted RR and min-connections techniques described here, at the cost of some drops. On top of that, it may also support periodically probing a health endpoint on each processor, with specific settings on minimum response time, deciding on processor healthiness based on that out-of-band, predetermined request only.

  • @damonguzman
    @damonguzman 1 month ago +12

    When did this channel become "Read Along Blogs with Theo"?

    • @SharunKumar
      @SharunKumar 1 month ago +1

      I wouldn't know about these posts if not for this channel 🤷🏻‍♂️

  • @tobiasfedder1390
    @tobiasfedder1390 1 month ago

    That is a great blog post. Also, I'd love to know more about load balancers, especially how they work with multiple active load balancers: the same IPs for multiple servers, heartbeats, and so forth. I've tried to read about it but I just can't grasp it.

  • @balaclava351
    @balaclava351 1 month ago +1

    Soon we're gonna need load balancers for the load balancers. Xzibit meme anyone?

    • @samrosewho
      @samrosewho 1 month ago +2

      Multi-tier load balancing is quite common in practice! The big companies need quite a few levels, at different layers in the stack, to achieve their scale.

  • @sumshitteinnit8484
    @sumshitteinnit8484 1 month ago +1

    You're masking a lisp?? You're doing it extremely well in that case

  • @Malix_off
    @Malix_off 1 month ago +2

    Handle any hard load with another guest

  • @dandigangi_em
    @dandigangi_em 1 month ago

    Just discovered that GCP has a central region. AWS needs to get one!

  • @poweron3654
    @poweron3654 1 month ago +4

    Load balancing when the load balancer goes down or you have to do session pinning 😭😭

  • @ErazerPT
    @ErazerPT 1 month ago

    Guess the next step up would be the LB actually being able to figure out which requests "naturally" take longer, since not all requests are equal a priori. It would then prioritize sending the most expensive request to the best available server. Can't prove it off the bat, but I'm pretty sure that on a fully loaded system with a somewhat even mix of requests, this would pretty much distribute everything to the point where any request would be close to the "average response time" line.

    • @samrosewho
      @samrosewho 1 month ago

      A really tricky problem in practice, to the point where I’ve never seen it done. I very nearly didn’t cover weighted round robin because it’s so impractical to rely on humans to judge the cost of anything.
      Closest I’ve seen is splitting your API endpoints out into groups of “slow”, “medium” and “fast” and treating those buckets differently.

    • @ErazerPT
      @ErazerPT 1 month ago

      @@samrosewho I was thinking about the API too when I wrote it. But, as you said, it's "impractical to rely on humans", and we now have something that is sort of good at making predictions; it just needs a lot of data, and well... this is precisely the kind of data we can synthetically generate ;)

  • @precumming
    @precumming 1 month ago

    As a backend person I'm definitely up for you doing more backend and ops stuff, you seem to be really good at finding things like this

  • @CubaneMusic
    @CubaneMusic 1 month ago

    Can you overload your load balancer or is that not really a concern?

  • @TheTmLev
    @TheTmLev 1 month ago

    Consistent hashing sometimes matters much more than any other load balancing algorithm

    • @samrosewho
      @samrosewho 1 month ago

      I have a post about consistent hashing that’s in draft. I haven’t really seen it used in the load balancing space, it’s usually put to work in data sharding from what I’ve seen.

    • @TheTmLev
      @TheTmLev 1 month ago

      @@samrosewho hey Sam! It was used extensively in the paper about Maglev, Google's distributed load balancer ("Maglev: A Fast and Reliable Software Network Load Balancer"): static.googleusercontent.com/media/research.google.com/en//pubs/archive/44824.pdf
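A minimal consistent-hash ring, the generic technique discussed in this thread (not Maglev's specific table-building algorithm), might look like this sketch:

```python
import bisect
import hashlib

class HashRing:
    """Map keys to nodes via points on a hash ring. With virtual
    nodes, removing one server only remaps the keys that hashed to
    that server, instead of reshuffling everything."""
    def __init__(self, nodes, vnodes=64):
        points = []
        for node in nodes:
            for i in range(vnodes):
                points.append((self._hash(f"{node}#{i}"), node))
        points.sort()
        self._hashes = [h for h, _ in points]
        self._nodes = [n for _, n in points]

    @staticmethod
    def _hash(key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def lookup(self, key):
        # Find the first ring point clockwise of the key's hash,
        # wrapping around at the end of the ring.
        i = bisect.bisect(self._hashes, self._hash(key)) % len(self._hashes)
        return self._nodes[i]
```

Removing a node leaves every key that wasn't mapped to it on its original server, which is the property that makes this useful for sharding and, as in Maglev, for connection-stable load balancing.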

  • @YuriBez2023
    @YuriBez2023 1 month ago +1

    "Called called" bug patched and deployed.

  • @Wielorybkek
    @Wielorybkek 1 month ago

    that was so cool!

  • @johnnygri99
    @johnnygri99 1 month ago

    We must make Sam explain all the things.

    • @samrosewho
      @samrosewho 1 month ago +1

      The mitochondria is the powerhouse of the cell.

    • @johnnygri99
      @johnnygri99 1 month ago

      @@samrosewho 🤯

  • 1 month ago

    Obnoxiously easy to understand! Have no other words other than PERFECT! ❤❤❤❤🎉🎉🎉

    • @samrosewho
      @samrosewho 1 month ago +1

      You flatter me! ❤

  • @benschmaltz5789
    @benschmaltz5789 1 month ago

    More backend brother. Full stack programmers, we ride at dawn

  • @AveN7ers
    @AveN7ers 1 month ago +1

    How many times did you change the title of this video? 🤣🤣

  • @rickdg
    @rickdg 1 month ago +1

    If your servers are stateless (PHP says hi), you can just scale horizontally as needed, and the load balancer can default to round robin since the servers are identical.

    • @samrosewho
      @samrosewho 1 month ago +4

      I talk about this in the post, but be careful assuming your servers are identical! Odds are they aren’t, especially if you’re using VPSs in a cloud provider. Even machines in the same instance class can vary. One of the things I’d love people to take away from this post is that with minimal effort (usually 1-2 lines of nginx config or whatever) you can do quite a bit better than round robin.

    • @rickdg
      @rickdg 1 month ago

      @@samrosewho Thanks for the reply. Have you noticed considerable differences in spawning identical VPSs? Depending on your stack, each server can be really simple. Usually, the bottleneck is then the database server, which is a whole different story.

    • @samrosewho
      @samrosewho 1 month ago

      @@rickdg I’ve seen non-trivial differences in servers using exactly the same hardware, it can be pretty wild.

    • @rodjenihm
      @rodjenihm 1 month ago +1

      Shared VPSs with the same specs can vary a lot. Only with dedicated servers can you "bet" that they are equally powerful, but dedicated ones are way more expensive.

    • @asaurcefulofsecrets
      @asaurcefulofsecrets 3 days ago

      The requests are not, even for the same endpoint/entity. Example: listing items on account X is light with 0 items. I add 100 items, then it is not. I re-query immediately on the same instance and it is light again, because it is cached.

  • @gro967
    @gro967 1 month ago

    How is UploadThing even a product? This is the kind of project we did in a weekend at university; anyone with more than 2 days in IT can easily build it in no time...

  • @shadyworld1
    @shadyworld1 1 month ago

    WOW

  • @John_Versus
    @John_Versus 1 month ago +1

    Typos like "called called" tell you it was actually done by a human. 😄

    • @samrosewho
      @samrosewho 1 month ago

      I was so annoyed when it was pointed out. I re-read this post at least a dozen times 😅

  • @PureGlide
    @PureGlide 1 month ago

    It would be ironic if Sam's website was overloaded. Opportunity missed haha

    • @samrosewho
      @samrosewho 1 month ago

      It’s a static site on GitHub pages. If it ever got overloaded it’d be an excellent day. 😁

  • @SandraWantsCoke
    @SandraWantsCoke 1 month ago +1

    That article is tits!

  • @the-real-random-person
    @the-real-random-person 1 month ago +1

    Am I so early 😅 still gotta watch the vid lol

  • @edumorangobolcombr
    @edumorangobolcombr 1 month ago +2

    Hello night owls

    • @jmatya
      @jmatya 1 month ago +1

      Hello, night people from the US. You know, timezones. People also watch him from Europe 😉

    • @penewoldahh
      @penewoldahh 1 month ago

      hello night person from the US

  • @INDABRIT
    @INDABRIT 1 month ago

    Rip any red/green colorblind viewers

    • @samrosewho
      @samrosewho 1 month ago +4

      I knowwwww, I have been putting effort into fixing this in subsequent posts. Sorry if this is something that made the post difficult for you. It’s not that I don’t care, it’s that I’m not very good at this yet 🙈

    • @INDABRIT
      @INDABRIT 1 month ago

      It was actually an awesome explanation, with really nice visuals. I wouldn't know a better way to show the difference without colors either. And I'm not a color expert who could pick new colors; I just know red/green is a pretty common colorblindness (if that's a word?)

    • @samrosewho
      @samrosewho 1 month ago +2

      It is! Chrome has tools for emulating different types of colourblindness that I didn’t know about at the time, but I use on all my posts now. I’m starting to branch out into using different shapes and patterns as well, so as not to rely solely on colour 😁

  • @samuelgunter
    @samuelgunter 1 month ago

    sam who?

  • @VicioGaming
    @VicioGaming 1 month ago +2

    So you're again reacting to stuff other people made without really providing anything new?
    What a surprise

    • @samrosewho
      @samrosewho 1 month ago +5

      I wrote the post and I’m ecstatic Theo used his reach to introduce more people to my work 🙂

    • @VicioGaming
      @VicioGaming 1 month ago +2

      Sure, but he still didn't provide anywhere near enough value for this video to be justified.
      The majority of this video is just him reading word for word what you wrote.
      He could've gone out and gathered more blog posts/other materials like yours, shown us parts of those, and provided links in the description.
      But that's too much work, which is why this video is what it is.

    • @nickmoore5105
      @nickmoore5105 1 month ago

      @@VicioGaming and yet you’ve watched it and you are commenting on it

    • @VicioGaming
      @VicioGaming 1 month ago

      @@nickmoore5105 As if it changes anything.
      I watched it to the end to see if Theo actually brings value with this video and, what a surprise, he doesn't

  • @abdelmananabdelrahman4099
    @abdelmananabdelrahman4099 1 month ago +1

    #ad