The YouTube algorithm has picked up your channel. Really good content.
I like that these are short and sweet. It shouldn't take an hour to explain TinyURL or web crawler. Thanks!
Exactly 👍
Excellent! Could you also talk about what kind of network protocols the services would use to talk to each other?
Is this really a good use case for Bloom filters? They have false positives, which means they might say something was visited when it actually wasn't (assuming we keep a set of visited URLs). So roughly 0.1 to 1% of URLs would never be visited!
Since crawling is a continuous process, a workaround might be to make the Bloom filter's values change with every run, so that a URL missed the first time isn't automatically missed again on the next run.
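A minimal sketch of that workaround idea, just to make it concrete (this is my own illustration, not what the video describes): salt the Bloom filter's hash functions with a per-run seed, so the bit positions a URL maps to differ between crawl runs, and a false positive in one run is unlikely to repeat in the next.

```python
import hashlib
import math

class SaltedBloomFilter:
    """Bloom filter whose hashes are salted with a per-run seed, so a URL that
    collides (false positive) in one crawl run is unlikely to collide again
    in the next run when the seed changes."""

    def __init__(self, capacity: int, error_rate: float, run_seed: str):
        # Standard Bloom filter sizing: m = -n*ln(p)/(ln2)^2, k = (m/n)*ln2.
        self.size = math.ceil(-capacity * math.log(error_rate) / (math.log(2) ** 2))
        self.num_hashes = max(1, round(self.size / capacity * math.log(2)))
        self.bits = bytearray(self.size // 8 + 1)
        self.run_seed = run_seed  # changes on every crawl run

    def _positions(self, url: str):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{self.run_seed}:{i}:{url}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, url: str) -> None:
        for pos in self._positions(url):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, url: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(url))


# Usage: a fresh seed per run means false positives are not repeated across runs.
seen = SaltedBloomFilter(capacity=1_000_000, error_rate=0.01, run_seed="run-2024-05-01")
if not seen.might_contain("https://example.com/page"):
    seen.add("https://example.com/page")
    # ... fetch and parse the page
```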
Great video man
Is it "Font queue prioritizer" or "Front queue prioritizer"?
Awesome video. Would it make sense for the URL Seen Detector and URL filter to come after the HTML parser step?
Thanks for the comment! You would want the duplicate detection to occur directly after the HTML parser, because we don't want to process the same data and extract the same URLs from the same page; that's why the URL Seen Detector and URL filter happen later on in the system. Hope this makes sense!
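A toy crawl loop to illustrate that ordering, under my own assumptions (the function and variable names here are stand-ins, not the video's exact components): content duplicate detection sits right after parsing, while the URL filter and URL Seen Detector act later, on the extracted links only.

```python
import re
from collections import deque
from hashlib import sha256

seen_fingerprints = set()   # content duplicate detection
seen_urls = set()           # URL Seen Detector
frontier = deque(["https://example.com"])

def fetch(url: str) -> str:
    return "<html>...</html>"  # placeholder; a real crawler downloads the page here

def allowed(url: str) -> bool:
    return url.startswith("https://")  # stand-in for the URL filter rules

while frontier:
    url = frontier.popleft()
    html = fetch(url)

    # Duplicate detection directly after parsing: skip pages whose content
    # we have already processed, so we never re-extract the same links.
    fingerprint = sha256(html.encode()).hexdigest()
    if fingerprint in seen_fingerprints:
        continue
    seen_fingerprints.add(fingerprint)

    # URL filter + URL Seen Detector applied later, to the extracted links.
    for link in re.findall(r'href="([^"]+)"', html):
        if allowed(link) and link not in seen_urls:
            seen_urls.add(link)
            frontier.append(link)
```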
How does the design of a web crawler not include geo-located servers, etc.?
During the duplicate detection step, how is the Content Cache being used? Could someone please explain?