Why serverless is sometimes GOATED (removing 1,000 image backgrounds in seconds)

  • Published Jan 31, 2025

COMMENTS • 88

  • @bmdavis419
    @bmdavis419 5 months ago +56

    The twitter beef that led to this was absolute cinema

    • @hbr3474
      @hbr3474 5 months ago +1

      What's the story?

  • @butwhothehellknows
    @butwhothehellknows 5 months ago +28

    Good job babe!!!! I love it!

  • @Skuiggly
    @Skuiggly 5 months ago +34

    Yacine shaking in his boots rn

    • @ashatron656
      @ashatron656 5 months ago

      Lol some quality twitter drama 😂

  • @kwibuske
    @kwibuske 5 months ago

    Love how you give a business perspective alongside the technical one.

  • @benutza5362
    @benutza5362 5 months ago

    I was hoping you'd publish a video on this! Thank you for going this far in-depth; I hadn't done anything with AWS before sooo...

  • @clashgamers4072
    @clashgamers4072 5 months ago +13

    Tech Twitter is like a cult that loves to shit on serverless, Python, etc. Really appreciate you standing up for what you believe in instead of following others.

  • @ItsD3vil
    @ItsD3vil 5 months ago

    You are GOATED bro, keep it up!!!!

  • @furycorp
    @furycorp 5 months ago +2

    A middle ground between server and serverless is having a script that spins up an EC2 instance (or 10, or 100, or...) with user data to do whatever you need (even with an ASG if you need to autoscale -- old-school AWS style), and another script that shuts it down when done. No monthly fees for spot work. So many people have this mental block that the second it's a "server"/VPS it can't be ephemeral, but AWS is built around that concept.
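
A minimal sketch of the ephemeral-EC2 pattern this comment describes, in Python with boto3. The AMI id, instance type, and worker command are placeholders, not anything from the video:

```python
def build_user_data(worker_cmd: str) -> str:
    """Build a user-data script that runs one job, then powers the box off.

    With shutdown behavior set to 'terminate', poweroff terminates the
    instance, so nothing keeps billing after the job finishes.
    """
    return "\n".join([
        "#!/bin/bash",
        "set -e",
        worker_cmd,
        "poweroff",  # the instance takes itself down when the job is done
    ])

def launch_ephemeral_worker(worker_cmd: str, ami: str, itype: str = "c6i.8xlarge"):
    import boto3  # imported lazily so build_user_data stays dependency-free
    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=ami,                     # placeholder AMI with your worker baked in
        InstanceType=itype,
        MinCount=1,
        MaxCount=1,
        UserData=build_user_data(worker_cmd),
        # terminate (not just stop) on OS shutdown -> truly ephemeral
        InstanceInitiatedShutdownBehavior="terminate",
    )

# Usage (hypothetical):
# launch_ephemeral_worker("docker run my-bg-remover s3://bucket/batch-1",
#                         ami="ami-0123456789abcdef0")
```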

  • @Qrzychu92
    @Qrzychu92 5 months ago +1

    I would say a nice middle ground is a k8s cluster with autoscaling. You can have the same setup as with lambdas (SQS, S3), but write your message consumer to run in a Docker pod. Set the scaling based on the number of messages in the queue and voilà -- you get pretty much the same pros as with lambdas while keeping the max cost in check.
    I also saw a talk where a company hit the limit of Lambda parallelism, and their solution was... a dedicated server and a queue :)
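
The queue-depth scaling rule suggested above can be sketched as one small function; the messages-per-pod and replica-cap numbers are illustrative, and the cap is what keeps the max cost bounded:

```python
import math

def desired_replicas(queue_depth: int, msgs_per_pod: int,
                     max_replicas: int, min_replicas: int = 0) -> int:
    """Replicas needed to drain the queue, clamped to [min, max].

    `msgs_per_pod` is how many queued messages one pod is expected
    to own at a time; `max_replicas` caps the bill.
    """
    want = math.ceil(queue_depth / msgs_per_pod) if queue_depth else 0
    return max(min_replicas, min(want, max_replicas))

# e.g. 35 queued messages, 10 per pod, cap of 50 -> 4 replicas
```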

  • @t3dotgg
    @t3dotgg 5 months ago +13

    What’s your MRR though? Jokes aside, this is a dope showcase of a really valuable workflow

    • @Schisek
      @Schisek 5 months ago +1

      I think for something like the Icongeneratorai thing, this serverless approach is an easy calculation. Each generation costs a fixed amount of credits, so the more you have to pay for your lambdas, the more revenue you've made from your users

    • @ashundeyan8031
      @ashundeyan8031 5 months ago

      Theo watches web dev cody confirmed

    • @braincel05
      @braincel05 5 months ago

      Ah yes, the progenitor of it all. When is your vid on this coming?

    • @WebDevCody
      @WebDevCody 5 months ago +4

      not high enough, but I'm still youngish so I'll figure it out one day. Does this comment mean another reaction is coming?!? Or did you just watch it for fun?

    • @pencilcheck
      @pencilcheck 5 months ago

      @@WebDevCody with theo you never know

  • @oSpam
    @oSpam 5 months ago

    Great video! I’m loving the AWS series. I’m a bit of an AWS fanboy at the moment (also racking up some big bills), but I’m honestly loving it. Though having more AWS services than customers is a little depressing xd
    Keep it up :)

  • @RayFernando1337
    @RayFernando1337 5 months ago +5

    Casually, “image of someone cookin.” 🔥 let him cook!

  • @EduarteBDO
    @EduarteBDO 5 months ago +8

    For this type of use case serverless is pretty good. There's also the fact that Lambda can scale down to 0, so you pay nothing when that happens. If you rent a machine, you have to keep paying the rental contract even if you don't need it anymore.

    • @furycorp
      @furycorp 5 months ago

      You can "rent" an EC2 with an aws-cli command and "unrent" it just as easily.

  • @antidegenerates7449
    @antidegenerates7449 5 months ago

    Dude. AWS course. We need it!

  • @aymenbachiri-yh2hd
    @aymenbachiri-yh2hd 5 months ago

    Thank you so much for this info

  • @patolorde
    @patolorde 5 months ago +1

    It's really cool to see this AWS stuff

  • @rishabincloud
    @rishabincloud 5 months ago

    Interesting, thanks for sharing!

  • @ivbaklan
    @ivbaklan 5 months ago +2

    I wonder, if you tried doing it on your machine, how long it would take to process the files? Sometimes we don't need to turn the task at hand into a web service for others to use. But it's true that if it is something you plan on letting other people interact with, it is almost always cheaper and more efficient to do it yourself

  • @tanko.reactions176
    @tanko.reactions176 4 months ago +2

    boundless power = boundless expenses

  • @C4CH3S
    @C4CH3S 5 months ago +1

    This is what the BEAM VM was made for -- like Golang but for distributed systems.

  • @mustafasen4228
    @mustafasen4228 5 months ago

    I really like your videos, bro. I also have a question: I wonder whether it would make sense, in terms of budget and security, to use cloud-based systems for ERP projects of small and medium businesses. If there is someone who can help, I am ready to listen; I know that the audience of this channel is high quality.

  • @sivtech
    @sivtech 5 months ago +2

    My VPS serves 3,000 users per month with no issues, but some things struggle when processing large data. I’ll definitely try serverless.

    • @oSpam
      @oSpam 5 months ago +1

      How much memory and how many cores does it use? Would you mind sharing the price you pay for it too? Thanks! I'd love to know if serverless would be a good move for you :)

  • @aigamemakerThai
    @aigamemakerThai 5 months ago

    Thanks for sharing

  • @marcialabrahantes3369
    @marcialabrahantes3369 4 months ago

    nit: "SQS queue" is redundant btw ;P
    awesome video!

  • @mmzzzmeemee
    @mmzzzmeemee 5 months ago +1

    Cool
    I'm also curious how Theo did his own take -- guess I gotta wait

  • @jeremyAr
    @jeremyAr 5 months ago

    Loved the X drama 😂

  • @Ayoubased
    @Ayoubased 5 months ago

    lol I love Cody!

  • @sarabwt
    @sarabwt 5 months ago +6

    Well... it depends on the scale, but renting an 80-core server just for this seems like overkill. If you rent that kind of server, you can pretty much run your entire infra on it and more. At least while you are getting 1.2M req/month :D

    • @WebDevCody
      @WebDevCody 5 months ago +2

      My point was that unless you get a decent number of cores, you will never get the concurrency Lambda provides out of the box. A 32-core machine means a user is going to wait 156 seconds for it all to finish. Maybe that's fine. Maybe it's not. Now if 3 users do the action at the same time, someone is going to be queued up for a while.

    • @valley-artifact
      @valley-artifact 5 months ago

      @@WebDevCody isn't that parallelism, not concurrency?

    • @WebDevCody
      @WebDevCody 5 months ago

      @@valley-artifact ask gpt 😉

    • @nark4837
      @nark4837 4 months ago

      @@WebDevCody yes, it's a good point, but 1.2M requests per month is ~28 requests per minute; on an 80-core machine, with each request taking 6 seconds, you can run ~800 requests per minute, so that is overkill.
      I understand you are allowing for bursty workloads, but the dedicated server is still provisioned for ~25x the traffic it'll likely get, which is very overkill -- fair enough to overprovision by 5x or so to account for spikes in traffic.
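
The arithmetic this thread is trading can be written out. A per-image time of 5 s reproduces the ~156 s quoted above for a 32-core machine; the 6 s figure from this reply gives the ~800 req/min number (both figures are the commenters' estimates, not measurements):

```python
def batch_seconds(images: int, per_image_s: float, workers: int) -> float:
    """Wall-clock time for a CPU-bound batch spread over `workers` parallel slots."""
    return images * per_image_s / workers

def req_per_minute(monthly_requests: int) -> float:
    """Average request rate for a 30-day month."""
    return monthly_requests / (30 * 24 * 60)

# 1,000 images at 5 s each on 32 cores -> ~156 s for the user to wait;
# one lambda per image -> roughly the per-image time, plus overhead.
```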

  • @tmanley1985
    @tmanley1985 5 months ago

    Lambda has its use case and for me, it's perfect for fanout situations such as this.

  • @inderjotsingh5868
    @inderjotsingh5868 5 months ago

    You didn't include the cost of the Lambda warm pool. Either the time increases from 6 seconds to 7 or 8, or you have to keep a warm pool of lambdas, which, if I'm not wrong, will cost a lot more. But still an awesome explanation, and the visualization part using Convex was especially dope.

    • @WebDevCody
      @WebDevCody 5 months ago +1

      Yeah, it adds an extra second of billed duration -- that's a good point.
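
For scale, that extra billed second is cheap: using the published x86 Lambda rate of about $0.0000166667 per GB-second (the 2 GB memory size and 1,000 invocations are assumptions, not figures from the video), it comes to a few cents per batch:

```python
# Published AWS Lambda x86 compute price (us-east-1), USD per GB-second.
GB_SECOND_USD = 0.0000166667

def extra_cost(invocations: int, memory_gb: float, extra_s: float) -> float:
    """Cost of `extra_s` additional billed seconds on every invocation."""
    return invocations * memory_gb * extra_s * GB_SECOND_USD

# 1,000 invocations x 2 GB x 1 extra second -> about $0.03
```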

  • @WebDevCody
    @WebDevCody 5 months ago +1

    Btw I looked around some more, and a dedicated server with 24 cores / 48 threads is like $400-600 a month on many websites. Idk where I got that 80-core pricing… maybe they were selling old CPUs on Hetzner, or ARM.

    • @oSpam
      @oSpam 5 months ago

      Yeah, dedicated servers (or even EC2 instances) are really, really expensive. Thanks for the insights, much appreciated!

    • @scrapycholo2659
      @scrapycholo2659 5 months ago

      Pricing is low because those servers are rusty and dusty 😂

  • @DaveTheDeveloper
    @DaveTheDeveloper 5 months ago +2

    Interesting to see how JS can scale. But note how a service and Lambda are needed to scale up, since native JS can't do so on its own.
    Would be interesting to see if a dedicated server using Golang with goroutines could beat this without needing several lambdas and a message queue. Golang offers better hardware efficiency, so such hardware might not be needed.

    • @rodjenihm
      @rodjenihm 5 months ago +4

      The point is that the tasks are CPU-bound and take a long time. Goroutines shine for I/O-bound tasks, so I don't think you would see much of a difference. When you have CPU-bound tasks, the only way to go faster is to add more cores and process tasks in parallel.

    • @WebDevCody
      @WebDevCody  5 місяців тому

      Exactly

    • @neociber24
      @neociber24 5 months ago

      You can also write lambdas in Golang or Rust

  • @nark4837
    @nark4837 4 months ago

    Another thing is that serverless can really easily be abused by your customers, driving up your charges by a ton -- but I guess then you can think about implementing rate limiting in your backend server and so on, e.g. allowing background removal on only 10 images per minute

    • @WebDevCody
      @WebDevCody 4 months ago

      right, you need rate limiting in place regardless of serverless or not, but when doing a huge concurrently processed batch operation like this, you'd better have something in place before someone bankrupts you.
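
A minimal in-memory sketch of the "10 images per minute" guardrail discussed above. This is a fixed-window counter; a real deployment with multiple instances would back it with Redis or similar rather than process memory:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` operations per user per `window_s` seconds."""

    def __init__(self, limit: int = 10, window_s: int = 60):
        self.limit = limit
        self.window_s = window_s
        self._counts = defaultdict(int)  # (user, window index) -> count

    def allow(self, user: str, now=None) -> bool:
        now = time.time() if now is None else now
        key = (user, int(now // self.window_s))  # which window are we in
        if self._counts[key] >= self.limit:
            return False                          # reject: window exhausted
        self._counts[key] += 1
        return True
```

Checking `allow()` before enqueueing the background-removal job is what keeps one user from fanning out thousands of lambda invocations.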

  • @jazzdestructor
    @jazzdestructor 5 months ago

    Isn't there a way to limit the lambdas used? Like autoscaling groups? That could potentially stop you from going overboard

    • @WebDevCody
      @WebDevCody 5 months ago

      Yes, they have a way to limit the concurrency. So like throttling.

  • @bonk1463
    @bonk1463 5 months ago

    not defending the server setup, but would a queue library such as BullMQ solve most of the problems of retries and the need for a load balancer?

    • @WebDevCody
      @WebDevCody 5 months ago

      BullMQ, RabbitMQ, Redis queues -- yes, all of these could help. Honestly you just need an in-memory queue inside your running service, aka a semaphore type of thing.
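
What that in-process, semaphore-style queue can look like, sketched with asyncio (the job body is a stand-in for the real per-image work; the instrumentation only exists to show the concurrency cap holding):

```python
import asyncio

async def process_image(name: str, sem: asyncio.Semaphore,
                        active: list, peaks: list):
    async with sem:              # at most N jobs inside this block at once
        active[0] += 1
        peaks.append(active[0])  # record how many are running right now
        await asyncio.sleep(0)   # stand-in for the real processing work
        active[0] -= 1

async def run_batch(names, concurrency: int = 4) -> int:
    """Run all jobs, capping parallelism; return the peak concurrency seen."""
    sem = asyncio.Semaphore(concurrency)
    active, peaks = [0], []
    await asyncio.gather(*(process_image(n, sem, active, peaks) for n in names))
    return max(peaks)

# Jobs beyond the cap simply wait on the semaphore -- that waiting line
# is the "in-memory queue".
```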

  • @adityakhare2935
    @adityakhare2935 5 months ago

    But don't you run into timeouts with this approach, which I believe default to 3 seconds? So if you need more processing power -- say for processing videos, or for processing multiple images in a single Lambda invocation -- you either have to increase the timeout and memory for the Lambda or just use a dedicated server. I still believe that if you need more compute, a beefy VPS is better

    • @WebDevCody
      @WebDevCody 5 months ago +1

      Lambda can go up to 15 min of processing, 10 GB of memory and 6 vCPUs. That's pretty beefy, and you can distribute your jobs across multiple lambdas for concurrency.

    • @adityakhare2935
      @adityakhare2935 5 months ago

      @@WebDevCody yeah, that could work
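
One way to sketch the "distribute your jobs to multiple lambdas" idea from this thread: chunk the image keys so each invocation stays well inside the 15-minute limit. The 6 s/image figure and the half-window safety factor are assumptions, not the video's actual settings:

```python
LAMBDA_MAX_S = 15 * 60   # Lambda's hard timeout ceiling, in seconds
PER_IMAGE_S = 6          # assumed processing time per image
SAFETY = 0.5             # only fill half the window, to survive slow images

def chunk_keys(keys, per_image_s: float = PER_IMAGE_S):
    """Split image keys into batches sized to fit one Lambda invocation."""
    per_chunk = max(1, int(LAMBDA_MAX_S * SAFETY // per_image_s))  # 75 here
    return [keys[i:i + per_chunk] for i in range(0, len(keys), per_chunk)]

# Each chunk would then become one SQS message / one Lambda invocation,
# so 1,000 images fan out across 14 parallel workers.
```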

  • @laurentperroteau632
    @laurentperroteau632 5 months ago

    It's just that I started watching the video without understanding the word GOATED (I thought it was negative), and after a while I had to ask Google -- turns out it's rather positive 😁

  • @rayondaitel
    @rayondaitel 5 months ago

    Are you using an API service to remove the background, or running an AI model directly on your backend?

    • @WebDevCody
      @WebDevCody 5 months ago

      I used a Docker container which runs Python and a machine-learning model over the image.

  • @Oussama-uo1gf
    @Oussama-uo1gf 5 months ago

    So many people are scared of serverless because of huge unexpected bills, but as you showed, it can be way better for specific use cases like this. You just have to know what you're doing tbh, and there are also safety thresholds

    • @WebDevCody
      @WebDevCody 5 months ago +1

      yeah, just treat every serverless function as another attack vector and make sure you don't have runaway executions or open it up to DDoS

  • @astronautonmars
    @astronautonmars 5 months ago

    Impressive!!!!!!

  • @magnuserikkeenrobot1117
    @magnuserikkeenrobot1117 5 months ago

    awesome video, could I see the code?

  • @CodingEnjoyer
    @CodingEnjoyer 5 months ago

    Extra interesting

  • @kengreeff
    @kengreeff 5 months ago

    You could also run worker servers that scale up on demand. I would rather time out than get that serverless bill if it went viral

  • @adomicarts
    @adomicarts 5 months ago

    Nice

  • @satyak1337
    @satyak1337 5 months ago

    Isn't this a common use case for Lambda?? What's the Twitter controversy?

    • @WebDevCody
      @WebDevCody 5 months ago

      that's a good question; people have become so anti-serverless-pilled they think a dedicated machine is the answer to everything

  • @k98killer
    @k98killer 5 months ago

    "SQS queue" sounds weird every time I say it.

  • @vinialves12362
    @vinialves12362 5 months ago

    you're not human

  • @simonhartley9158
    @simonhartley9158 5 months ago

    I'm not saying that you don't have a point, or that serverless doesn't have its place, but other autoscaling options exist that don't require an immediate jump to an 80-core instance.

    • @WebDevCody
      @WebDevCody 5 months ago

      That is true, you could use EC2 and scale up using queue depth, and have your EC2 instances consume from a queue. At that point, again I'd argue: how much money are you actually saving? You'd need a lot of EC2 instances with a lot of cores to reach the same level of concurrency. It's all trade-offs.

    • @simonhartley9158
      @simonhartley9158 5 months ago +1

      @@WebDevCody it will depend on how bursty your workload is. If those instances only come online once a day for a short time, perhaps it's worth it.

  • @ac130kz
    @ac130kz 5 months ago

    sadly, Lambda introduces a lot of dependency complexity and vendor lock-in. I wish there were a common API for it, the way S3's API isn't tied to AWS

    • @ac130kz
      @ac130kz 5 months ago

      it made me think: why not just run autoscaling groups?

    • @neociber24
      @neociber24 5 months ago

      It's all about trade-offs.
      I'd argue you can architect your code so that you have an ImageResizerService and an ImageUploaderService that are injected into your lambda, but that's still adding complexity.
      And how much money would you have to be losing to consider migrating to another cloud? Migrations are always a pain

    • @WebDevCody
      @WebDevCody 5 months ago +1

      I wouldn't say the dependency is too high.
      If you structure your code correctly, you could easily refactor to use something else. For example, I'm just deploying a Docker container to Lambda, and that container could be run anywhere. All I'd have to change is a little bit of the code around where the events come in: I could consume them from Redis, RabbitMQ, Kafka, etc., and then run my normal logic.
      Using an auto-scaling group with EC2 instances is also an option, or using Fargate as a container host would do the same thing. It just wouldn't scale up as fast as Lambda, from what I understand.

    • @furycorp
      @furycorp 5 months ago

      Not really -- I tend to use Lambda through an adapter, so anything on the other side (e.g. Hono, Astro, etc.) is the same old Node.js. You can have a Lambda entry point that wraps the adapter and a regular Node entry point, and both import a common handler.
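
The adapter idea from this thread, sketched in Python rather than Node: the core handler is transport-agnostic, and thin entry points adapt Lambda events or queue messages to it. The function names and event shapes here are illustrative:

```python
import json

def handle(image_key: str) -> dict:
    """Transport-agnostic core logic (stand-in for the real processing)."""
    return {"key": image_key, "status": "processed"}

def lambda_handler(event, context=None):
    """AWS Lambda entry point: unwrap SQS-shaped records, call the core."""
    return [handle(json.loads(r["body"])["key"]) for r in event["Records"]]

def queue_consumer(message: bytes):
    """The same core behind e.g. a RabbitMQ/Redis consumer."""
    return handle(json.loads(message)["key"])
```

Because nothing below the adapter layer knows about AWS, swapping Lambda for a queue consumer (or vice versa) only touches the entry points.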

  • @butwhothehellknows
    @butwhothehellknows 5 months ago +6

    First!!!!

  • @jeremyAr
    @jeremyAr 5 months ago +1

    it's base64 😭

  • @xoutaku7600
    @xoutaku7600 5 months ago

    i'd rather not be broke

    • @WebDevCody
      @WebDevCody 5 months ago +3

      1,500,000 images until you hit the price of an 80-core $224 VPS. Weigh out your business needs.
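
The breakeven arithmetic behind this reply, using the published x86 Lambda rate of ~$0.0000166667 per GB-second. The 2 GB memory size and 5 s per-image duration are assumptions, which is why this sketch lands near, rather than exactly at, the 1.5M figure:

```python
# Published AWS Lambda x86 compute price (us-east-1), USD per GB-second.
GB_SECOND_USD = 0.0000166667

def breakeven_images(server_usd_month: float, memory_gb: float,
                     per_image_s: float) -> int:
    """How many images Lambda can process for one month of server rent."""
    per_image_usd = memory_gb * per_image_s * GB_SECOND_USD
    return int(server_usd_month / per_image_usd)

# breakeven_images(224, 2, 5) -> roughly 1.3M images per month before
# Lambda costs as much as the $224 dedicated box.
```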