System Design Interview: Design a Web Crawler w/ an Ex-Meta Staff Engineer

  • Published Oct 1, 2024

COMMENTS • 170

  • @anchalsharma0843
    @anchalsharma0843 2 months ago +27

    I have been watching too many system design videos, and most of them throw boxes and tools at the canvas just for the sake of it. But your videos follow an interesting and pragmatic approach that someone could actually use to design a real system. Above all, I truly appreciate the framework that you are instilling in viewers' minds for tackling problems. Thanks for your efforts 🚀

  • @vigneshraghuraman
    @vigneshraghuraman 3 months ago +17

    by far the best System design interview content I've come across - please continue making these. you are doing an invaluable service!

  • @crackITTechieTalks
    @crackITTechieTalks 3 months ago +9

    I often don't comment on videos, but I couldn't stop myself from commenting on yours, just to say: what valuable content! Thanks a lot for all your videos!! Keep doing this.

  • @_launch_it_
    @_launch_it_ 3 months ago +29

    I had an interview last Friday (June 14) and I followed your exact steps. The question was to design Ticketmaster. The Redis cache solution was the best. Thank you for these amazing videos

  • @Global_nomad_diaries
    @Global_nomad_diaries 3 months ago +5

    Soo soo soo much thankful I am for all this content.

  • @aaa-hw2ty
    @aaa-hw2ty 2 months ago +2

    400 Gbps NIC 😂

    • @YoussifSalama
      @YoussifSalama 11 days ago

      I think he was off by a couple orders of magnitude there 😅

  • @jiananstrackjourney370
    @jiananstrackjourney370 1 month ago +1

    Great video! I have a question: is 5k requests per second realistic, even with the most powerful machine on EC2?

  • @IntSolver
    @IntSolver 2 months ago +2

    Hey, thanks for your video. I have watched all your content and I gained immense amount of knowledge.
    I gave my E4 interview a week back, and my question was this (with a slight variation: the crawling was done through an app deployed on 10k devices).
    I covered all the content you've presented here in the same structure, and was able to dive deep into all the parts the interviewer asked about.
    I was expecting an offer but got rejected due to a "No Hire" in the design round. After retrospection, I found some people saying the chord algorithm and a peer-to-peer crawler were expected. I still don't understand the cause for the No Hire, because the interviewer didn't hint at anything and was aligned throughout.
    The experience was really heartbreaking. So, I just wanted to leave it out here that even though I did my best, it wasn't my day (I guess).
    Thanks for your videos, nonetheless

    • @hello_interview
      @hello_interview 2 months ago

      So sorry to hear that, that’s such disappointing news to receive. It’s always a toss up. Keep your head high and best of luck with future endeavors 💪

  • @jk26643
    @jk26643 3 months ago +5

    Please please keep posting more! It educates so many people and you make the world better!! :) Absolutely the best system design series!

  • @randymujica136
    @randymujica136 1 month ago +1

    In my opinion, one of the most important bullets of your strategy is how you minimize the initial HLD while making sure you deliver something that actually covers all the functional requirements. I find this calibration really valuable and not that easy to achieve, since as a senior candidate one can be tempted to go straight to deep dives without clearly marking that pause between the HLD and the deep dives.
    What do you recommend to get better at this?

  • @rupeshjha4717
    @rupeshjha4717 3 months ago +4

    Bro, please don't stop posting this kind of content; I've really loved all of your videos so far.
    I can relate to the kind of small, impactful problems and solutions you mention in your videos, which indirectly make a difference in interviews.

  • @qwer81660
    @qwer81660 3 months ago +5

    By far the most inspiring, relevant, and practical system design interview content. I found it really useful for performing strongly in my system design interviews.

  • @sharanya_sr
    @sharanya_sr 2 months ago +2

    Thank you for the great content, and congratulations on making this a go-to channel for system design. The content is refreshing and of the watch-once, never-forget kind. I'd request a video on how to approach a problem we have not seen before: what's the best we can do, like mapping it to a related system or reasoning logically about how the API/design would work, focusing on the problem asked.

  • @omerfarukozdemir5340
    @omerfarukozdemir5340 2 months ago +3

    Great content as always, thank you! Some comments about the design.
    1. Concurrency within a crawler is going to bring a huge performance bonus.
    2. Running an async framework for network IO is much faster than using threading.
    3. We can put the retry logic within the crawler to make things simpler.
    4. DNS caching looked like overengineering because DNS is already cached at multiple layers: programming language, OS, ISP, etc.
    5. We're processing the HTML in another service but hashing the HTML in the crawler; that seems wrong.

    • @Dao007forever
      @Dao007forever 1 month ago

      5. You don't want to put the same content into Blob storage. We are IO-bound; computing a hash (SHA) is cheap.

  • @davidoh0905
    @davidoh0905 3 months ago +2

    This is such a great example for any kind of data application that needs asynchronous processing! Widely applicable!

  • @brijeshthakrar2106
    @brijeshthakrar2106 15 days ago +1

    I've been building a web scraper on my own and using similar logic, and after a month, I see this.
    I swear to god this helped me a lottttt, but honestly, it's good that I didn't see this on day 1. Otherwise, I would not have learned things on my own.
    Great job, guys.
    PS: I got to know about you from Jordan. Keep posting great content, both of you guys!!!

  • @AlexZ-l7f
    @AlexZ-l7f 3 months ago +3

    Again, the best system design interview overview I've ever encountered. Please keep doing this for us!

  • @alirezakhosravian9449
    @alirezakhosravian9449 3 months ago +2

    I'm watching your videos to prepare for my interview in 4 days; I hope I'll be able to handle it :DDD. So far the best SD videos I could ever find on YouTube.

  • @TheKarateKidd
    @TheKarateKidd 3 months ago +1

    One of the first things that came to mind at the beginning of this problem is dynamic webpages. Most websites don't serve the majority of their content as simple HTML. To be honest, if I were interviewing a senior or above candidate, not mentioning dynamic content early on would be a red flag. I'm glad you included it at the end of your video, but I do think it is important enough to be mentioned early on.

  • @davidoh0905
    @davidoh0905 3 months ago +1

    If Kafka does not support retry out of the box, what does that mean exactly? If you do not commit, the offset does not move, which could potentially serve as a kind of retry(?) Also, could you compare this with another queueing service that supports retries, like SQS? A comparison of when to use Kafka vs SQS would be really good too! Message broker vs task queue might be their most frequent use cases, but it might be good to provide justifications in this scenario.

  • @akshat3106
    @akshat3106 2 months ago +1

    I could not find where it is mentioned that AWS SQS has a built-in exponential backoff retry mechanism. Can anyone please share the link? Thanks a lot!

    • @hello_interview
      @hello_interview 2 months ago

      On mobile but scroll through the comments. I linked the aws docs in response to another comment.

    • @akshat3106
      @akshat3106 2 months ago

      @@hello_interview Thanks for the reply, but I could not find it

    • @fran_sanchez_yt
      @fran_sanchez_yt 1 month ago

      @@hello_interview I haven't been able to find the link and I also wasn't able to find this exponential back-off feature mentioned in the SQS docs...

  • @zy3394
    @zy3394 3 months ago +1

    love your content , learned a lot, please keep updating more. ❤

  • @itayyahimovitz86
    @itayyahimovitz86 7 days ago

    Great video! I would probably add a proxy component to this design for the part where the crawler makes HTTP calls to fetch the HTML (maybe for the DNS lookups as well).
    This is a critical part of designing a web crawler: you want to avoid making the calls directly from the network where the crawlers are deployed, in case all of your network IP addresses get blocked, and for security reasons you also want to isolate outgoing network calls from your instances.

  • @damluar
    @damluar 14 days ago

    To avoid batching URLs from the same domain together, can we use Kafka partitions and spread messages by hash(URL)? Since different crawlers work at different paces, it is likely they will pick up those URLs at different times.
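The partition-spreading idea in the comment above can be sketched in a few lines of plain Python standing in for a Kafka custom partitioner (the partition count and URLs here are made up for illustration):

```python
# Sketch: route each URL to a Kafka partition by a stable hash of the URL,
# so URLs from one domain scatter across partitions instead of batching up.
import hashlib

NUM_PARTITIONS = 12  # illustrative; a real topic's partition count varies

def partition_for(url: str) -> int:
    # hashlib gives a hash that is stable across processes and restarts,
    # unlike Python's built-in hash(), which is salted per process.
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

# Same URL always maps to the same partition; different URLs spread out.
pages = [f"https://example.com/page/{i}" for i in range(6)]
assignments = {u: partition_for(u) for u in pages}
```

One caveat: hashing spreads load across consumers, but it does not by itself enforce per-domain politeness (crawl delay); that still needs rate limiting keyed on the domain.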

  • @TheKarateKidd
    @TheKarateKidd 3 months ago +1

    This is the first video of yours I watched and I loved it. Your pace is just right and you explain things well, so I didn't feel overwhelmed like I usually do when I watch systems design videos. Thank you!

  • @t.jihad96
    @t.jihad96 6 days ago

    Thank you for the effort; please keep up the good work. I'm watching your videos as if they were a Netflix series, very exciting. I was hoping you'd cover a topic like: if the crawler processed a message but crashed before committing back to the queue, how would you handle that? Is there a generic solution that can be used across different systems instead of workarounds?

  • @kunliu1062
    @kunliu1062 28 days ago

    Wow, wish I had found this much earlier.
    Now I certainly wouldn't just go into my next interview and throw the bloom filter onto the diagram without deep thinking 😝

  • @damluar
    @damluar 15 days ago

    How would you choose the initial frontier URLs? How many would be enough?

  • @swagatrath2256
    @swagatrath2256 1 month ago

    Very well explained!
    If possible, please share some tips on how one can keep up with the latest technologies and develop a mindset for system designs like this.
    I feel like I'm good at coding but not that great when it comes to designing architecture like this.
    Basically, what I'm looking for is how one progresses from a Developer role to an Architect role.

  • @xymadison
    @xymadison 1 month ago

    This is awesome; it is a very comprehensive and clean explanation. I've learnt a lot from your videos, thanks. May I ask what tool or website you use as the whiteboard?

  • @rahulrandhiv
    @rahulrandhiv 2 months ago +1

    I watched this during the wait for my flight back home from Goa :) and completed it

  • @sushmitagoswami2033
    @sushmitagoswami2033 1 month ago

    Excellent video! One thought: would it be possible to increase the font size a bit? Thanks so much!

  • @deshi_techMom
    @deshi_techMom 21 days ago

    I absolutely love the detail you go into, and you have great presentation skills! Super admirable! You just made the system design interview easier for me.

  • @mularys
    @mularys 3 months ago +2

    Here is my concern: your solution is so nice, but if everyone talks about the same things during the interview, especially when driving the process, will it raise any red flags on the hiring committee side, as they might think candidates are referring to the same sources?

    • @hello_interview
      @hello_interview 3 months ago +7

      This is not meant to be a script. If your plan is to regurgitate this back to an interviewer I’d recommend not doing that. Instead it’s a teaching resource to learn about process, technologies, and potential deep dives. If you get this problem, then sure, talk about some of this stuff, but also let it be a conversation with the interviewer

    • @rostyslavmochulskyi159
      @rostyslavmochulskyi159 3 months ago

      But is there an issue if you answer all or most of the interviewer's questions correctly? I believe it is an issue if you memorise this but can't go any further; if you can, there is nothing wrong.

    • @mularys
      @mularys 3 months ago

      @@hello_interview Yeah, makes sense. You present a good framework to structure the talking points that candidates can bring up. And I found it pretty useful. My system design question is the top-k video and I followed the key points you mentioned. My target is E5 and the interviewer just had a handful of follow-up questions (90% of the time I was talking). Eventually, I passed that round with a "strong hire". Of course, I added my points of view during the interview, but I feel like I was just taking something off the shelf.

  • @krishnabansal7531
    @krishnabansal7531 3 months ago

    I hope someone asks me Web Crawler question.

  • @flyingpiggy741
    @flyingpiggy741 1 month ago

    Why do we need a DNS server? Wouldn't it be enough to grab the text from a URL?

  • @kamakshijayaraman3747
    @kamakshijayaraman3747 19 days ago

    I am not able to understand the math for the number of AWS instances. Can someone explain?

  • @Global_nomad_diaries
    @Global_nomad_diaries 3 months ago +1

    Can this be asked in product architecture interview at Meta or just system design?

    • @hello_interview
      @hello_interview 3 months ago

      Should be system design not product architecture in meta world. But, you never know, some interviewers go rogue.

  • @chongxiaocao5737
    @chongxiaocao5737 3 months ago +1

    Finally a new update! Appreciate it!

  • @krishnabansal7531
    @krishnabansal7531 3 months ago

    Suggestions:
    Please mention the clarifying questions to ask for each specific problem. Even if the problem is well known, the panel still expects a few clarifying questions, especially from a senior candidate.
    Also, if you can cover company-specific expectations (if any) for the top MAANG companies, that would be excellent.

  • @TimothyZhou0
    @TimothyZhou0 19 days ago

    Damn this is extremely nuanced. Some of the big-picture improvements (like adding the parsing queue) seemed kind of obvious, but then Evan would optimize it with a neat detail (e.g. including link in request so we don't have to fetch from database) that was so simple and yet hadn't occurred to me. Great series, great content, thanks so much!

  • @georgepesmazoglou4365
    @georgepesmazoglou4365 3 months ago +1

    Great design! I wonder why there was no mention of doing the whole thing with Spark, using offline batch jobs rather than realtime services?

    • @afge00
      @afge00 3 months ago

      I was thinking about batch as well

    • @hello_interview
      @hello_interview 3 months ago

      Interesting. You know, as many times as I've asked this, no one has ever proposed it. Off the top of my head I see no obvious reason why you couldn't get it to work, especially for a one-off crawl.

    • @georgepesmazoglou4365
      @georgepesmazoglou4365 3 months ago +1

      @@hello_interview I do crawling for a large company. Typically you would do something like the video's design when you care about data freshness. If you don't care about that, as in the LLM use case, you would do a sparky thing where you just split the work across a bunch of workers; you can have the HTML fetching and processing parts in different stages. Your inputs can be the URLs and previously crawled pages, joined so that you crawl only new URLs, or recrawl URLs only after some time has passed since their last crawl. The main disadvantage compared to your design is that you are not as fault tolerant, since you can't do much in terms of checkpointing. Also it is less fun to discuss :)

  • @letsgetyucky
    @letsgetyucky 3 months ago +2

    commenting for the algo. thanks for excellent and free content!

    • @hello_interview
      @hello_interview 3 months ago

      Legend 🫡

    • @letsgetyucky
      @letsgetyucky 3 months ago +1

      ​@@hello_interview Feedback: really enjoyed the video! Would love if future videos were also mostly skewed towards deep dives. Suggesting other topics to research yourself (or hash out with others in the comments) is also super valuable. Finally, calling out the anti patterns that are being regurgitated (e.g. bloom filters) is very valuable as well.

    • @davidoh0905
      @davidoh0905 3 months ago

      @@letsgetyucky are bloom filters an anti-pattern!? just curious!

    • @letsgetyucky
      @letsgetyucky 3 months ago

      @@davidoh0905 During the deep dive, Evan says that bloom filters are commonly used in interviews because they appear in the solutions in popular interview prep books. But those books don't do a great job of discussing the tradeoffs of a bloom filter vs more practical solutions. It's a nice theoretical solution, but in a real-world system you could do something simpler and just brute-force the problem.

  • @prahaladvenkat
    @prahaladvenkat 1 month ago

    Your channel is a gold mine! Thanks a ton.
    How to decide whether to use Kinesis data streams or SQS? Although they serve different purposes, it feels like both are good options to begin with, generally. Here, SQS ended up being a better option because of retries, DLQ support, etc. But ideally, I'd like to be able to deterministically and correctly choose the right option in the beginning itself.
    It'll be super helpful if you could quickly reason out in the videos (in just 1 or 2 lines) why you pick a certain offering over other seemingly similar technologies/offering!

  • @BhaskarJayaraman
    @BhaskarJayaraman 1 month ago

    Great content.
    In the deep dive around 52:41 ("when you get a new URL you'll put it on here, it'll be undefined, and then when we actually parse it we'll update this") and 52:46
    ("the real last crawl time and with the S3 link, which also would have been undefined, so that would handle that"): I think you mean that when we actually crawl and download it, we'll update it with the last crawl time and the S3 link.
    Also, when you use Dynamo the lookup will be O(1), not O(log n). It would be great if you showed the DynamoDB GSI schema.

  • @dibll
    @dibll 3 months ago

    Hope you can create videos for the write-ups done by other authors on HelloInterview in the near future. Love the content. Thank you!!

  • @technical3446
    @technical3446 1 month ago

    A few inputs:
    - The bandwidth calculation needs to factor in uploading data to S3 as well. You will probably also compress on upload, and HTML should be fairly compressible.
    - At that rate, the system will likely not be network-throughput bound but latency and connection-count bound. Assume each site takes 1 sec to return the page; at 10k requests per second per node you will need 10k TCP connections, which is under the possible limit but will lead to a number of perf issues.
    - Memory requirements: 10k * 2 MB = 20 GB should be enough, but all of this is GC-able, so there is less reusable memory alongside the TCP connections.
    - You will likely be better off using a lower node type, around 50 Gbps; utilisation beyond that on a single node is going to be challenging and you will hit other limits.
    - Another optimisation would be to do the parsing and crawling in the same process to avoid handing the HTML content to a separate process. You can also update the DB in one write with all the links.
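For what it's worth, the sizing figures debated in this thread can be sanity-checked with some quick arithmetic (the 10k req/s, 2 MB/page, and 1 s latency numbers are the commenter's assumptions, not measurements):

```python
# Back-of-the-envelope check for a single crawler node.
requests_per_sec = 10_000          # assumed request rate per node
page_bytes = 2 * 1024 * 1024       # assumed 2 MB average page
latency_sec = 1.0                  # assumed 1 s average site response time

# Inbound bandwidth: bytes/sec -> bits/sec -> Gbps.
download_gbps = requests_per_sec * page_bytes * 8 / 1e9

# Little's law: concurrent connections = arrival rate * time in system.
concurrent_conns = requests_per_sec * latency_sec

# Memory to buffer all in-flight pages at once (upper bound).
inflight_memory_gb = concurrent_conns * page_bytes / 1e9
```

At roughly 168 Gbps inbound and around 10k open TCP connections, the thread's point stands: a single node hits connection-count and latency limits well before a 400 Gbps NIC is the bottleneck.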

  • @Anonymous-ym6st
    @Anonymous-ym6st 2 months ago

    Thanks for the great content as always! One quick question: for the Redis vs global secondary index comparison, given the data can fit on a single instance, if we use a hash-based index (not sure if Dynamo supports it, but MySQL should), then it should also be O(1), and Redis in this case would be a bit of over-engineering?

  • @philopateernabil1421
    @philopateernabil1421 1 month ago

    Can't we just ignore failed websites? No need to retry, as we already have millions of others to process in the frontier queue.

  • @joo02
    @joo02 18 days ago

    I confirm your hair and hat didn't have any negative influence in the making of this System Design video.

  • @cedarparkfamily
    @cedarparkfamily 1 month ago

    I still can see the ad here

  • @aforty1
    @aforty1 2 months ago

    Thanks for this! As for checking the hash @ 57:00, wouldn't we already have the last hash, since we had to retrieve that URL record before fetching the webpage (to get the lastCrawlTime)?

  • @CS2dAVE
    @CS2dAVE 3 months ago

    S Tier system design content! Another exceptional video 👏

  • @eshw23
    @eshw23 22 days ago

    Evan your explanations are extremely amazing and the best on this channel. Hope to hear more soon.

  • @NeelCrasta
    @NeelCrasta 6 days ago

    Depth should be on the Domain table instead of the URL table. URLs would be unique, so their depth would not increase; depth increases within a domain, and having a max depth keeps us from falling into a loop trap.

    • @hello_interview
      @hello_interview 6 days ago

      True! Might’ve mistyped/misspoke. Thanks!

    • @NeelCrasta
      @NeelCrasta 6 days ago

      @@hello_interview Your system design videos are amazing.

  • @zfarahx
    @zfarahx 3 months ago +1

    Another bump for the algo!

  • @tushargoyal554
    @tushargoyal554 1 month ago

    I usually refrain from commenting but this is by far the best explanation I can find for this problem statement.
    I work at Amazon, the use of message visibility timeout for exponential backoff is exactly what we do to add a delay of 1 hour for our retryable messages. One very minor practical insight is to not use the metric approximate message receive count because it is almost always incorrect because the count goes up if a thread reads the message but doesn't process it. I used a retry count attribute while putting message in the queue and checked whether it exceeds the retry threshold.

    • @hello_interview
      @hello_interview 1 month ago

      Super cool and good to know! Appreciate you sharing that
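The retry-count-attribute pattern described in this thread can be sketched with plain dicts standing in for SQS messages (the attribute name and threshold are made up for illustration; a real implementation would carry the counter in an SQS message attribute):

```python
# Sketch: track retries in a message attribute instead of relying on
# ApproximateReceiveCount, which also counts reads that never actually
# processed the message. Names and threshold are illustrative.
MAX_RETRIES = 5

def next_action(message: dict) -> str:
    """Decide what to do with a message whose processing just failed."""
    retries = int(message.get("attributes", {}).get("retry_count", 0))
    return "dead_letter" if retries >= MAX_RETRIES else "requeue"

def requeued_copy(message: dict) -> dict:
    """Copy of the message with the retry counter bumped, to re-enqueue."""
    attrs = dict(message.get("attributes", {}))
    attrs["retry_count"] = int(attrs.get("retry_count", 0)) + 1
    return {**message, "attributes": attrs}

# Simulate a URL that fails repeatedly: after 5 requeues it dead-letters.
msg = {"body": "https://example.com/page", "attributes": {}}
action = next_action(msg)
while action == "requeue":
    msg = requeued_copy(msg)
    action = next_action(msg)
```

Because the counter is only incremented when a worker deliberately re-enqueues after a failure, a read that crashes before processing does not inflate it.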

  • @PiyushZambad
    @PiyushZambad 2 months ago

    Thank you for making these videos so engaging! Your eloquent and logical style of explaining the concepts makes watching these videos so much fun.

  • @davidoh0905
    @davidoh0905 3 months ago +1

    Just in time!!!!

  • @bobberman09
    @bobberman09 3 months ago

    Can you post the 2nd top-voted one (YouTube) earlier? At least the written version :) Also very interested in the stock exchange question, but I see that's further down.

    • @hello_interview
      @hello_interview 3 months ago +1

      The written version is coming this week, or early next at the latest! Almost done :)

    • @bobberman09
      @bobberman09 2 months ago

      @@hello_interview Looking forward to it :) Love the videos btw; it feels like this is the only system design content I can trust for interview prep

  • @allenliu1065
    @allenliu1065 1 month ago

    Best explanation of the bloom filter, Redis set, and hash as a GSI.

  • @perfectalgos9641
    @perfectalgos9641 2 months ago

    Thanks for this video; it is one of the best on the internet for crawler system design. With full preparation you are looking at an hour: how do you manage it in the 35 minutes of a 45-minute interview?

    • @hello_interview
      @hello_interview 2 months ago

      Yeah, the hour here is because of all the fluff and teaching. This is reasonably 35 minutes without that.

  • @zayankhan3223
    @zayankhan3223 3 months ago

    This is one of the best system design interview videos. Kudos to you. I would like to understand a little more about how we handle duplicate content. What if the content is 80% the same across two pages? A hash only works when pages are exactly the same.

  • @tomtran6936
    @tomtran6936 2 months ago

    What is the tool you are using to draw and take notes, Evan?

  • @praneethnimmagadda1938
    @praneethnimmagadda1938 2 months ago

    Just wondering: there is no mention of an inverted index in this crawling flow, even though an inverted index would help during searches?

  • @ehudklein726
    @ehudklein726 6 days ago

    good stuff!

  • @fufuhu148
    @fufuhu148 1 month ago

    I am not entirely sure I agree with the trade-off discussion between the bloom filter and the hash (GSI).
    Hash collisions can occur, which means we can still get false positives with GSI hashes.

    • @fufuhu148
      @fufuhu148 1 month ago

      I think it might be necessary to do a byte-by-byte check when we find a hash match, to make sure it's not just a hash collision.

    • @hello_interview
      @hello_interview 1 month ago

      Hash collisions will almost certainly not occur. They're so rare they're not worth designing around for a system like this, where the consequence is minor. It's a 1 in 340 undecillion chance lol

    • @fufuhu148
      @fufuhu148 1 month ago

      @@hello_interview I agree with you. My point was that a hash collision is as likely as a false positive in a bloom filter

  • @Analytics4u
    @Analytics4u 2 months ago

    There is no mention of sharding here?

    • @Analytics4u
      @Analytics4u 2 months ago

      I like the deep dive section

  • @lixinyi7734
    @lixinyi7734 1 month ago

    What is the text editor you are using? I like it.

  • @happybaniya
    @happybaniya 1 month ago

    Best❤

  • @RafaelDHirtzPeriod2
    @RafaelDHirtzPeriod2 1 month ago

    So sorry for being Microsoft Word, but on all of your videos THE APROACH is spelled incorrectly. Thank you so much for posting all your videos. Super helpful for all of us interviewees out there!

    • @hello_interview
      @hello_interview 1 month ago +1

      🤦🏻‍♂️first person to notice this. Will fix next video!

  • @vimalkumarsinghal
    @vimalkumarsinghal 3 months ago

    Thanks for sharing the SD on web crawler.
    Question: how should we handle dynamic pages, subdomains, URLs that loop back to the same URL, and URLs with query strings? What is the best approach to identify duplicates?
    Thanks

    • @hello_interview
      @hello_interview 3 months ago

      May not totally understand the question, but you could just drop the query strings from extracted urls

  • @evalyly9313
    @evalyly9313 3 months ago

    So, to be able to give the right back-of-the-envelope estimate, the base knowledge is that an AWS instance's capacity is 400 Gbps. I don't have this knowledge in mind; is it OK to ask or search during the interview, or is this something we should keep in mind?

    • @hello_interview
      @hello_interview 3 months ago

      I think it's useful to have some basic specs as a note on your desk when interviewing. But it's also OK to ask. The intuition that caches can hold up to around 100 GB and DBs up to around 100 TB is good to have, though.

  • @dibll
    @dibll 3 months ago

    Not related to this video in particular, but I have a question about partitioning. Let's say we have a DB with 2 columns, firstname and lastname. When we say we want to prefix the partition key (firstname) with lastname, does that mean all similar lastnames will be on the same node? If yes, what happens to the firstnames; how will they be arranged? Thanks

    • @hello_interview
      @hello_interview 3 months ago

      If the primary key is a composite of first and last then no; this just means that people with the same first and last name will be on the same node

  • @sanketpatil493
    @sanketpatil493 3 months ago

    Cannot thank you enough for all this valuable content. Just amazing work!
    Btw, can you share some good resources for preparing for system design interviews? Books, courses, engineering blogs, etc.
    A dedicated video would be even more helpful!

    • @hello_interview
      @hello_interview 3 months ago

      I'm certainly biased, but I think our content is some of (if not the) best out there, so I would start at www.hellointerview.com/learn/system-design/in-a-hurry/introduction.
      There are also some useful blogs on system design, depending on your level, at www.hellointerview.com/blog,
      all written by either me or my co-founder (ex-Meta sr. hiring manager).

  • @yottalynn776
    @yottalynn776 2 months ago

    Very nice explanation! When actually crawling the pages, it could be blocked by the website owner. Do you think we need to mention this in the interview and provide some solutions like using rotating proxies?

    • @hello_interview
      @hello_interview 2 months ago +1

      Good place for depth! Ask your interviewer :)

  • @shoaibakhtar9194
    @shoaibakhtar9194 14 days ago

    I gave the Meta interview last week and I was able to crack it. All thanks to you, brother.
    The system design round went extremely well. I followed the exact same approach in all the questions, and everything went really well.
    Keep posting the videos; these are the best content on the internet for system design.

    • @hello_interview
      @hello_interview 14 days ago +1

      Let’s go!!!! Congrats! Thrilled to hear that. Well done 👏🏼

  • @nanlala3171
    @nanlala3171 3 months ago

    I saw you used many AWS services in your design. Is it good practice to use specific products and their features (DLQ/SQS, GSI/DynamoDB) in the design? What if the interviewer has never used these products and has no concept of these services/features?

    • @hello_interview
      @hello_interview 3 months ago +2

      Depends on the company; in general, yes. But, importantly, don't just name the technology. The important part is that you understand the features and why they'd be useful. For example:
      Bad: I'll use DynamoDB here.
      Good: I need a DB that can do XYZ. DynamoDB can do this, so I'll choose it.

  • @vamsikrishnabollepalli4908
    @vamsikrishnabollepalli4908 3 months ago

    Can you also provide system design interview flow and product design interview flow for each problem?

    • @hello_interview
      @hello_interview 3 months ago

      They're mostly the same tbh. www.hellointerview.com/blog/meta-system-vs-product-design

  • @mdyuki1016
    @mdyuki1016 3 months ago

    What's the reason for not storing URLs in a database like MySQL? For retrying, just add a column like "retry times".

    • @hello_interview
      @hello_interview 3 months ago

      I mention this at some point, I believe, when discussing the alternate approach of having a "URL Scheduler Service." The URLs have to get back on the queue somehow: either directly, or via a scheduler whose state is in the DB.

  • @Sandeepg255
    @Sandeepg255 3 months ago

    I think at 39:03 you are saying to set the visibility timeout of the message based on now - crawlDelay, but the visibility timeout concept is for a queue, so how are you planning to set it at the message level?

    • @hello_interview
      @hello_interview 3 months ago +1

      You can set them at the message level with SQS! From the docs, “Every Amazon SQS queue has the default visibility timeout setting of 30 seconds. You can change this setting for the entire queue. Typically, you should set the visibility timeout to the maximum time that it takes your application to process and delete a message from the queue. When receiving messages, you can also set a special visibility timeout for the returned messages without changing the overall queue timeout.”
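A sketch of how the per-message visibility timeout gives you exponential backoff on SQS (the base delay and cap values are illustrative; the boto3 `change_message_visibility` call is shown commented out since it needs a live queue, and `queue_url`, `message`, and `attempt` are assumed to exist in the caller's context):

```python
# Exponential backoff via per-message visibility timeout (sketch).
def backoff_visibility_seconds(attempt: int, base: int = 30,
                               cap: int = 43_200) -> int:
    """30s, 60s, 120s, ... doubled per attempt, capped at SQS's 12h max."""
    return min(base * 2 ** attempt, cap)

# With a live queue, you would extend only this message's invisibility:
# import boto3
# sqs = boto3.client("sqs")
# sqs.change_message_visibility(
#     QueueUrl=queue_url,                      # assumed defined elsewhere
#     ReceiptHandle=message["ReceiptHandle"],
#     VisibilityTimeout=backoff_visibility_seconds(attempt),
# )

delays = [backoff_visibility_seconds(a) for a in range(5)]
```

Because the timeout applies to the returned message rather than the whole queue, each failed URL can back off on its own schedule without affecting other messages.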

  • @healing1000
    @healing1000 3 months ago

    Thank you!
    To avoid duplicate URLs, do we need to discuss using a cache, or is it OK to only use the database?

    • @hello_interview
      @hello_interview  3 місяці тому

      Same convo as the duplicate content. A cache is certainly an option. The DB index is enough imo.

  • @bhaskardabhi
    @bhaskardabhi 3 місяці тому

    Won't there be a case where the HTML is different but the hash is the same? Is that even possible?

    • @hello_interview
      @hello_interview  3 місяці тому

      Not worth even considering. Hash collisions are so unlikely they’re not worth discussing

  • @trueinviso1
    @trueinviso1 3 місяці тому

    I wonder if questions about the type of content we are scraping matters? i.e. ignore suspicious sites or offensive content

  • @dhanyageetha1519
    @dhanyageetha1519 2 місяці тому

    Kafka also supports configurable exponential backoff on the producer side.

    • @hello_interview
      @hello_interview  2 місяці тому

      Yup, that's just to make sure the message gets onto the queue, so it's not the same problem we're solving here.

  • @theoshow5426
    @theoshow5426 Місяць тому

    Great content! Keep it coming!

  • @mihaiapostol7864
    @mihaiapostol7864 2 місяці тому

    Hello, I enjoyed your content a lot, I'm learning a lot from it, thanks!
    One question related to the design: you were saying around minute 52:00 that the check that the urlLink already exists should be done in the parser. But if this uniqueness check is not done earlier in the crawler, then the crawler could save the same text in S3 twice for the same urlLink, right?

    • @hello_interview
      @hello_interview  2 місяці тому +1

      Nope! We won't add new links to the queue if they already exist. That's why we check in the parser.

    • @mihaiapostol7864
      @mihaiapostol7864 2 місяці тому

      @@hello_interview understood, thank you!

  • @undercovereconomist
    @undercovereconomist 3 місяці тому

    Wow, the amount of depth here is absolutely insane. How can you compress so much information into a one-hour interview? I learned so much from this video that I never see elsewhere, and it is all presented so elegantly and naturally. The speaker speaks clearly: no ums and ahs, no speed-up. You must be a great engineer at work!
    One thing that I am a bit unsatisfied about is the duplicated content. Is it even possible that we actually have completely duplicated content? Even when there are two different web pages, they might differ in only a few places. That would completely break our hash function, right?
    Do you know of any hash function that would let two mostly similar web pages hash close together? Do you see any role for word2vec or vector storage here?

    • @ronakshah725
      @ronakshah725 3 місяці тому

      I think this is a great question! I want to attempt to answer it, but I'm no expert haha.
      As the goal of this particular system is to train language models, it's worth understanding whether optimizing for "similar" web pages is necessary for our top-level goal.
      In general, it could be helpful to prioritize learning based on chunks of text that appear in many pages. But we have to remember that connecting back to the source could also be required later, for things like citations. So we have to be a bit smart about this. TL;DR: it's a can of worms, and I would try to better understand the priority of this compared to the existing requirements of the system.

    • @ronakshah725
      @ronakshah725 3 місяці тому

      This isn't skirting the question, but it's a good step towards delivering our final solution.
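One well-known answer to the "similar pages hash close together" question (not covered in the video, so this is an aside) is SimHash, a locality-sensitive hash: near-duplicate documents get fingerprints with a small Hamming distance, unlike SHA-256 where any change scrambles everything. A toy sketch, with whitespace tokenization and an MD5-derived 64-bit token hash as simplifications:

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Locality-sensitive fingerprint: similar texts yield nearby hashes."""
    counts = [0] * bits
    for token in text.lower().split():
        # 64-bit hash per token (MD5 truncated; any stable hash works)
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    # Final fingerprint: sign of each bit-position tally
    return sum(1 << i for i in range(bits) if counts[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

near = hamming(simhash("the quick brown fox jumps over the lazy dog"),
               simhash("the quick brown fox leaps over the lazy dog"))
far = hamming(simhash("the quick brown fox jumps over the lazy dog"),
              simhash("completely unrelated text about cooking pasta recipes"))
```

Here `near` comes out much smaller than `far`, which is what lets a crawler flag near-duplicates cheaply; word2vec/embedding similarity is the heavier-weight cousin of the same idea.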

  • @Ryan-g7h
    @Ryan-g7h 2 місяці тому

    Which drawing tool is this?

  • @annoyingorange90
    @annoyingorange90 3 місяці тому

    really good video but please stop panning uselessly :D appreciate ur work!

  • @vzfzabc
    @vzfzabc 3 місяці тому

    Nice, thanks for the content. I also really appreciated the videos from the mock interview. I found that much more useful and would love to see more of those.

    • @hello_interview
      @hello_interview  3 місяці тому +1

      Tougher there for privacy reasons. Requires explicit sign off from coach and candidate, but I'll see what I can do :)

  • @serendipity1328
    @serendipity1328 23 дні тому +1

    why is it called frontier queue? Is this some kind of standard term?

    • @damluar
      @damluar 15 днів тому

      I believe the term comes from BFS, where we have a frontier of nodes and we expand the frontier as we go.
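That BFS analogy is easy to see in code: the queue holds exactly the "frontier", pages discovered but not yet crawled. A toy sketch over a made-up link graph:

```python
from collections import deque

# Hypothetical link graph: page -> pages it links to
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com", "d.com"],
    "c.com": [],
    "d.com": ["a.com"],
}

def crawl_order(seed: str) -> list:
    visited = {seed}
    frontier = deque([seed])  # the "frontier": discovered, not yet crawled
    order = []
    while frontier:
        url = frontier.popleft()
        order.append(url)
        for nxt in links.get(url, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return order

print(crawl_order("a.com"))  # ['a.com', 'b.com', 'c.com', 'd.com']
```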

  • @tori_bam
    @tori_bam 3 місяці тому

    Thank you for more amazing content! I'll be having a mock interview using Hello Interview soon.

  • @mohitaggarwal949
    @mohitaggarwal949 3 місяці тому

    If we store the hash in the URL table in DynamoDB, how does it handle the case of copied web pages that have different URLs but the same HTML?

    • @hello_interview
      @hello_interview  3 місяці тому

      Check the hash before storing in S3 and putting on the parsing queue

    • @shyamvani
      @shyamvani 3 місяці тому

      You need to store the hash of the page contents for the URL, not the hash of the URL itself.

    • @HandyEngineering
      @HandyEngineering 2 місяці тому

      I was going to ask the same question: you cannot avoid downloading by using a hash of the content 😊
      You can use this hash to mark duplicates and avoid storing the text output N times, true...
      You also mentioned a PK lookup before going into the hash and said log(N), an obvious typo.
      Great content overall
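The flow this thread converges on (download the page, hash its contents, and only store to S3 and enqueue for parsing if that hash is unseen) could be sketched like this; the in-memory set is a stand-in for the content-hash index in the URL table, and the function name is illustrative:

```python
import hashlib

seen_hashes = set()  # stands in for the DB index on content hash

def process_download(url: str, html: str) -> bool:
    """Return True if the page content is new and should be stored / parsed."""
    digest = hashlib.sha256(html.encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        # Same HTML already seen under another URL: skip storing and parsing
        return False
    seen_hashes.add(digest)
    return True

print(process_download("a.com/page", "<html>same body</html>"))    # True
print(process_download("mirror.com/p", "<html>same body</html>"))  # False
```

Note, per the thread, this only saves duplicate storage and parsing; the page still had to be downloaded before its content could be hashed.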