This is gold. You have shown your thought process and by following it I can pick up the whole web scraping concept easily. Love your video John.
You are the best teacher to learn scraping
Thank-you for taking the time to "make things a little bit bigger". So many channels have tiny fuzz in the corner of the screen and a huge empty space.
This technique kind of only works for Client-Side Rendered sites. Not SSR sites (server side)
This analysis happens on the client side. It lets you keep probing the APIs for weaknesses, and that is how you find the channel the client uses to exchange data with the server.
It would struggle with HTMX too, heh.
@@abg44 This won't work even for that, because you'll be blocked by anti-bot systems when hitting non-cached data.
Like Blazor wasm? The king of stacks.
Best Web Scraping Channel on UA-cam.
Just scraped a complete site with 70 lines of code.
this technique is really for CSR sites. with more and more sites switching to SSR, it's often not possible to just go straight to the APIs
In most cases SSR is used just for the first page, so robots get their mouths filled with the right stuff. Subsequent pages are hydrated on the client side over the API. This is the evolved pattern.
I’m new to data scraping, so please excuse my lack of knowledge, but I wanted to ask: since SSR delivers fully rendered content directly to the client, wouldn’t it be simpler to scrape data from SSR websites compared to CSR?
@@wkoell "In most cases SSR is just for the first page".
Why talk when you have no idea what you're talking about? 😂
That's exactly what I was going to say.
@@pedrolivaresanchez No. CSR pages typically include endpoints that return clean, structured data in formats like JSON (as demonstrated in the video), whereas with SSR you need to parse through HTML (which also includes a bunch of unwanted CSS and JavaScript) to extract the data you want.
Amazing tutorial as always! Can't wait to try this in production! For any potatoes like me on older Python versions, here are the changes you have to make:
1. add 'from typing import Optional, List'
2. change 'rating: float | None' to 'rating: Optional[float] = None'
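For anyone unsure what that change looks like in context, here's a minimal sketch. It uses stdlib dataclasses rather than whatever model class the video uses, and the field names are made up for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Product:
    name: str
    # On Python < 3.10 the 'float | None' union syntax raises a TypeError
    # when the annotation is evaluated; Optional[float] works everywhere.
    rating: Optional[float] = None

@dataclass
class SearchPage:
    items: List[Product] = field(default_factory=list)

page = SearchPage(items=[Product(name="boots")])
print(page.items[0].rating)  # None
```

The same `Optional[...]` substitution applies one-for-one in pydantic models.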
This is a masterpiece. More videos like this, John. The 20-minute format, peppering in the endpoint-manipulation explanation, is genius.
Nice, I ran into the same curl 403 issue while writing a Go scraper and used cf-forbidden to complete my request.
Amazing Reverse Engineering API video!
Your videos are amazing friend XD
I also scrape data as a living, particularly job data. This is all great information.
Another really good point: sometimes you have to loop over tags in the front end to extract an ID for each item. Building robust solutions that can withstand changes is a learned skill.
How can I learn this and do it as a living? Can you make 20k a year ?
@ronburgundy1033 If you're working for yourself, it could be difficult and take some time. You can build up a bunch of data that you have scraped and try to sell it. You can sell your services to a company that wants something scraped. You can work for a company that does its own scraping. Honestly, there are a lot of ways to go about it, but think of it as providing a service and providing data, and you can come up with some good solutions.
In regards to learning, find some sites that you want to try to scrape and start there; when you have a problem, ask on Stack Overflow or similar. There are also no-code options like UiPath.
yeah I can’t wait to see tls fingerprint video 😆
actually this is the best way of scraping, and it also makes structuring the data easier for me. i was already using this method more than a year ago
Thank you for this. Really thorough and excellent introduction into web scraping.
Very awesome, John. Insightful content, keep it up.
I'm trying to watch almost every one of your videos. They're very helpful.
Sick video man so easy to understand and execute, loads of ideas coming to mind
Thanks a lot for this John, really helpful brother. Bests
Thanks for this! This is exactly what I needed!
thanks for this! i thought this was yet another BeautifulSoup-type scraping video. such a detailed explanation
I think, this was your last scraping video. Nothing else has to be told about this topic.
Thank you!
100% agree that front-end scraping sucks. I remember having a hard time with Python Selenium because class names were generated inconsistently (maybe just to discourage scraping). For my last scraping project I used Deno with TypeScript. The API was only returning the HTML page for the web app, so I had to install a proxy certificate on my phone and read the mobile requests that actually returned JSON objects. You have to get creative from time to time, but there is no such thing as an unscrapable API 😅. Thanks for sharing your workflow!
Scraping, btw
Looks like your video finally made them add some security to their API. Well done Adidas 🎉😄
Great content as always, thanks! I'm looking forward to the fingerprint video. If I may make one request, I would love to see a video about decrypting the response when it is encrypted. I’m currently trying to deal with a website like that, and I believe the decryption process must be hidden somewhere in the JavaScript since I can see the data on the website but can’t figure out how to crack it. Thanks again for your videos, man.. I really appreciate them!
you need a secret key, which is commonly kept hidden in .env files, not just floating around in the JavaScript.
@@brendanfusik5654 thanks for your reply. My problem in the end was actually base64 encoding with protobuf layers, not encryption, but thanks anyway.
@@brendanfusik5654 isn't that a no-go? Pardon my ignorance
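For reference, peeling off a base64 layer like the one described above is a one-liner; the payload here is a made-up JSON example. A real protobuf layer underneath would additionally need the schema (or a guess-based decoder) to interpret:

```python
import base64
import json

# hypothetical API response body: base64-wrapped JSON
raw = "eyJwcmljZSI6IDk5fQ=="
payload = base64.b64decode(raw)   # b'{"price": 99}'
data = json.loads(payload)
print(data["price"])  # 99
```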
Yo you are the best youtuber, when it comes to scraping
And here I was about to start scraping and parsing HTML tags.
Very informative, thanks! I did not know about curl cffi but definitely going to check it out now.
Top top level materials and content as always. Thanks a lot.
Great video John, thanks!
John, I learned a ton from this and I had a lot of fun. Thanks
Great vid! Easy to follow, and comprehensive!
Another great video; keep up the great work.
Thanks Alan
Nice work mate, cheers for sharing.
You earned a new subscriber!
This just saved me so much Python coding and HTML scraping for financial data on interactive sites with JavaScript. God bless you :D
What do you do when a website consists of hundreds of static HTML pages held together with scotch tape and PHP?
write something to parse and collect from the html, and hope to hell they don't change the format of their site
Maybe, build something yourself, and stop consuming other peoples work?
@@darz_k.good advice man… why are we consuming this informative video. It’s not our work
@@viIden Even for a logical fallacy, that's weak.
Must do better.
@@darz_k.true, at least he didn’t ask for a use case for data collected this way - an actual question worth criticism for lacking creativity. His was valid, technical
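The "parse and collect from the html" route above can be done with just the standard library when the markup is stable. A rough sketch, with a made-up `<span class="price">` tag standing in for whatever the real site uses:

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Collects the text of every <span class="price"> element."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and dict(attrs).get("class") == "price":
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data.strip())
            self._in_price = False

parser = PriceParser()
parser.feed('<div><span class="price">$9.99</span>'
            '<span class="price">$12.50</span></div>')
print(parser.prices)  # ['$9.99', '$12.50']
```

For anything messier than this, BeautifulSoup or lxml is usually worth the dependency.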
Important to know: this only works as long as the site's backend doesn't require anti-CSRF tokens on the API requests.
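A common workaround when an anti-CSRF token is required: fetch the page first, pull the token out, and echo it back on the API call. The field name below (`csrf_token`) and the header name are just illustrative guesses; every site names these differently:

```python
import re

def extract_csrf(html: str):
    """Find a hidden input like <input name="csrf_token" value="...">."""
    m = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    return m.group(1) if m else None

page = '<form><input type="hidden" name="csrf_token" value="abc123"></form>'
token = extract_csrf(page)
print(token)  # abc123
# then send it back on the API request, e.g. as an 'X-CSRF-Token' header
# or as an extra form field, depending on what the site expects
```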
Another great video!
Thank you.
Very interesting. I didn't know about the TLS fingerprinting (but I did know about other kinds of fingerprinting).
I agree that most sites are probably fairly easy to scrape, but some seem straight-up impossible. There was one site I couldn't get around; its anti-bot protection was super good.
Scraping is such a deep and deceiving topic. It looks simple but there's so much behind it.
what do you do with the data you scrape?
No hate, I enjoy your content, but "reverse engineer this API" isn't really the right term for projects like these.
Well, he used it so clearly he can 😃
@@JakubSobczak 🤡
Fair enough, i see where you’re coming from. This example was more just seeing and using rather than anything else.
I'd say you're reverse engineering the usage of the API as a client..
It is a bit of reverse engineering, at least in the sense of not having docs and exploring related endpoints to obtain the required data. He even touches on request params like start to explore options not used on the site. I've done something similar to explore a bunch of search indexes, which led me to more data than was initially available on the websites.
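Playing with params like `start` usually means walking an offset. A sketch, with a hypothetical endpoint and hypothetical param names (check the real request in the network tab for the actual ones):

```python
from urllib.parse import urlencode

def page_urls(base: str, page_size: int = 48, pages: int = 3):
    """Yield paginated request URLs using hypothetical start/count params."""
    for i in range(pages):
        yield f"{base}?{urlencode({'start': i * page_size, 'count': page_size})}"

urls = list(page_urls("https://example.com/api/search"))
print(urls[1])  # https://example.com/api/search?start=48&count=48
```

Bumping `count` above what the site's own front end uses is often how you find the server-side maximum.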
This is a legit video! 💪💪
Great information and video! I had no idea about TLS fingerprinting.
New to this channel, just wanted to say that your content is so full of quality!!
Just the cureq tip would have saved me a lot of work on figuring out the right headers and cookies for the fingerprint
New to your channel. I really like your videos. Straight to the point with no fluff.
I've always had a bit of a weird habit of running apps through packet sniffers just to see their API requests. I found it fascinating, although I never really did anything with them. I've noticed that many modern websites like Instagram dynamically load data in a weird way that cannot be seen using the inspector. Do you have a video on this?
I'm learning how to scrape and this is a great start, thank you! When scraping my chosen webpage and pasting the query string into my browser, there are only 5 items despite the website having dozens. Is this something I need to fix?
The best = John
Well, client-side apps with an API are really easy, like you've shown. It's usually server-side pages where you can't grab the data from any API or XHR request, so you really have to scrape whatever data sits between the HTML elements you get.
I like your dress up, the earphone, the light and the color of your shirt, it is suitable with the grey background of command line tool
I assume this relies on the site being a SPA and sending JSON? I'm looking at a site that seems to respond with HTML :/
I think that would also apply to SSR sites, right?
Yes, that's right, but if it's SSR, look in the page source: there's often a lot of JSON data in there that saves you parsing loads of HTML tags
@@JohnWatsonRooney perfect, thanks :)
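For the SSR case John mentions, the page-source JSON often sits in a script tag. Next.js uses an id of `__NEXT_DATA__`; other frameworks vary, so view-source and search for a big JSON blob. A sketch with a made-up payload:

```python
import json
import re

def embedded_json(html: str):
    """Extract the JSON blob from a Next.js-style __NEXT_DATA__ script tag."""
    m = re.search(r'<script id="__NEXT_DATA__"[^>]*>(.*?)</script>', html, re.S)
    return json.loads(m.group(1)) if m else None

html = ('<script id="__NEXT_DATA__" type="application/json">'
        '{"props": {"products": ["boots", "shoes"]}}</script>')
print(embedded_json(html)["props"]["products"])  # ['boots', 'shoes']
```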
What if the XHR requests are hidden? When I go to the response, it just says false.
There will be lots of xhr requests - have a look through them all and see if any have the data you need. It doesn’t work for all sites
@JohnWatsonRooney I am finding some JSON now, thank you. One issue I am running across is that it's not consistent: I have found about two items with this information loading, but the rest don't have it. Why might this be? I also see a GET with a 404 called "current.jwt?app_client etc." Do you have any videos on possible roadblocks to scraping sites, in the context of the type of scraping you use in the video?
Brilliant Video!
Great video! How could one go about scraping password-protected sites in this fashion? Whenever I try to access the APIs of such sites by opening them in a new tab, I only get an error message saying that I'm not authorized.
Hi, I wanted to follow this tutorial, but it seems the search JSON response is no longer available. Any thoughts on how to fix that?
With the websites I try to scrape, I can find interesting responses like you've mentioned by monitoring network traffic, but when I try to directly access that API request URL in my browser, I encounter variations of this: '"message": "403 Forbidden - Valid API key is required"'. Does this just mean my target websites are intentionally preventing web scrapers from accessing them in this way?
What I am doing now is using Playwright to tediously navigate through every page and scrape the content of each one...
How do you get around servers protected with Same Site Origin checks?
John, truth be told, you are very good at your craft, but you have never done an end-to-end project with deployment and hosted APIs... you may want to look into that.
If you did an end-to-end project that is deployed, with automated scraping using cron jobs as a scheduler, trust me, it would boost your viewership.
Yeah seen this same video a hundred times
I'm pretty good with this format. He is probably one of the few YouTubers who cover the latest scraping techniques
@@kexec.please share these “latest scraping techniques” you speak of
@@stickyblicky11 Did you even watch the video? 💀
@@kexec.Yeah it’s pretty much standard practices 💀
How would you get TikTok ads that are in-app? The web doesn't have sponsored vids. Wonder how to scrape these.
Basically you need to run a MITM proxy to intercept the requests made by the app. I've not done it myself though
Sadly, this has an expiration date. Sites are moving more and more towards SSR and even hydration is sometimes html.
Thanks, that works perfectly on most sites. How do you look for API parameters if they are hidden, i.e. when the request URL has no parameters?
amazing vid. also tell ur dog I said woof
I've been trying to scrape some data through an API, but after each hour the cookie needed in the headers expires. How can I extract the cookie automatically instead of manually copying it from the latest cURL?
If you have the curl command to get new cookies or keys, why not make that call as part of your script and update the cookies when you get a 401?
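That retry-on-401 idea can be structured like this; `fetch` and `refresh` are placeholders for whatever request gets the data and whatever call renews the cookie (the demo uses fakes to show the control flow):

```python
def fetch_with_refresh(fetch, refresh, max_retries=1):
    """Call fetch(); if it returns an auth error, renew credentials and retry."""
    status, body = fetch()
    for _ in range(max_retries):
        if status not in (401, 403):
            break
        refresh()          # e.g. replay the login/cookie request from the curl
        status, body = fetch()
    return status, body

# demo with fakes: the first call hits an expired cookie, refresh fixes it
state = {"cookie_ok": False}
fake_fetch = lambda: (200, "data") if state["cookie_ok"] else (401, "")
fake_refresh = lambda: state.update(cookie_ok=True)
print(fetch_with_refresh(fake_fetch, fake_refresh))  # (200, 'data')
```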
I am a passionate web scraper as well, with a few years of experience. The hardest thing to scrape in my view is online Power BI tables (publicly available data); it's almost impossible to fetch the data as the backend doesn't respond. Have you cracked it? If so, could you make a video on it some day?
can I hire you to teach my guys how to do 1 website
what can we do with this data? any ideas plz
It's good for small websites, but what about LinkedIn and other big-data websites? You can't reverse engineer them because there is no XHR file. How can we reverse engineer them?
Hey john still waiting.
Probably best to avoid scraping websites like LinkedIn unless you want to get banned from the platform or sued
When something is published, it can still be under copyright, which does not allow uses other than the owner's. Could be illegal.
If the owner also intended that you watch ads or make purchases along with viewing the data/content, that is not covered by copyright law. It is similar to using ad blockers, and so far no major ruling has defined it as illegal.
Where it would likely become illegal is if you use the collected data for other commercial purposes not approved by the owner. But that would apply even if you manually visited each page and copy-pasted the data into an Excel sheet to sell it.
Incredible as always. Going to AI/DB this, a much better process than Scrapy. Cheers - 100z
is parsing html the best way to scrape server-rendered pages?
Hi John, thank you for the great videos. I have a RAG project (an AI assistant for an English article website, for English language learners) where I need to use all the articles as a vector database for my RAG agent. How should I automate this for free? Is there a free AI web scraper for building an AI assistant, or is it better to code an AI scraper from scratch instead of using an external platform?
Do you have a github with code examples?
Hi John. Thank you so much for these videos. They enabled me to actually create something without looking at thousands of lines of HTML. One question though: there seem to be some APIs that are invisible in the inspector, even though I know they are there. Is there a way to uncover these hidden APIs?
Can we do this with LinkedIn?
ROOOOOONEY!
Excellent Work :-)
I have one more question: do we need to get permission from a website, or contact them via email, before web scraping their content? Sometimes their guidelines and terms of use are vague. Do you get permission for your videos? I ask because I want to use their data to feed a RAG project as a vector data repository for semantic AI search.
Unless OpenAI or some other LLM provider loses a lawsuit for scraping publicly available data, I doubt it should be an issue.
Yes you absolutely need to get permission. This is their site, they built it, it's their data not yours.
"How I STEAL data from 99% of sites" is the correct title for this video...
What a scum you are, John.
Build your own app instead of basing it on theft.
Technically, scraping itself is fine if you are using publicly available data and not using stolen or leaked keys. This opinion is based on the ruling won by youtube-dl, where the judgement was that there was nothing wrong in saving the videos as long as the purpose is the same.
The part to consider is whether you use it for other commercial purposes. Then it's better to consult a lawyer or reach out to the company for access to their data. Especially if GDPR or similar laws apply, they have to ensure certain protections, such as deleting data on request. That cannot be satisfied if they are giving it out to companies to train their models on.
But again, I don't know if those laws apply to data that is accessible to the general public, so consider based on your circumstances.
your tutorials are a great help! a lot of sites are switching to Cloudflare, and it detects scraping a lot of the time. do you have any tutorials on HLS/DASH segmented video?
Why is scraping the html not going to work at all?
Can you please make a video of how to handle SSR scraping?
Can you make a video to explain the waterfall stuff at the bottom of (fetch/xhr). I can see whenever you click it comes up as grey
Thanks much for this. Now I am getting {"error":"Anti forgery validation failed"} on a particular site. Any thoughts on how to work around it?
I really liked the video, and I noticed that a lot of it is reverse engineering of the site or its APIs. But what can I do when I experience blocks because the site uses Cloudflare, for example?
Thank you very much for your contribution!
hey guys, why don't I see search?q=boots in dev tools? I'm a newbie, thanks for helping.
Just looking for some kind of solution. Great stuff.
Can u teach us how to scrape a website with a cart? I've been working on one for months, but I can't add a product to the cart with requests.
I have never seen this approach, but it seems a lot easier than faffing about with website designs and puppeteer or selenium.
@john do you have a course and how can get in touch with you
do you have a course ?
I'm subscribing but show us your dog in the next one! 😅
Haha
What about graphql?
I've seen it work the same way, but GQL is less common and I've got less experience with it
Great content
I'm scraping data from a shipping line's website, but I need to log in to get the bearer token and enter it into my Python code for the API calls to work. I need to be able to log in via Python and obtain the access token. Is this possible?
Try submitting a POST request to the auth login endpoint
What Snozcumber said, or you can automate signing with a headless browser and copy the cookies
@@Archbishop-Desmond-Tutu Thanks dude
@@hurtado-w9c cheers, very helpful!
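A stdlib sketch of the flow suggested above. The endpoint URL and the JSON field names (`username`, `password`, `access_token`) are hypothetical, so check the site's actual login request in the network tab first:

```python
import json
import urllib.request

LOGIN_URL = "https://example.com/api/auth/login"  # hypothetical endpoint

def bearer_header(token: str) -> dict:
    """Build the Authorization header the API calls will need."""
    return {"Authorization": f"Bearer {token}"}

def login(username: str, password: str) -> dict:
    """POST credentials, read the access token, return the auth header."""
    body = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        LOGIN_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return bearer_header(json.load(resp)["access_token"])

# reuse the returned header on every subsequent API call, e.g.
# urllib.request.Request(api_url, headers=login("user", "pass"))
```

If the site sets session cookies instead of returning a token, the headless-browser approach mentioned above (log in, then copy the cookies) is the fallback.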
From where do you get web scraping work?
Even my grandma can do this.
How do you deploy a Selenium script? I couldn't do it.
What about sites without JSON, that just serve a document?
Please design a course for veterans, not coders, to dive into and learn ❤. Also suggest which tech to start learning and where to start from
The best part of all of this is the scammers loss aversion being used against them in the same way they use it against victims.
Unlike the normal scambait shenanigans they probably feel an immense sense of loss afterwards since they already feel like the money is theirs. Overall really entertaining
Is there a way to bypass mfa/otp when scraping?
I think 99% of people just need the UPC code, price, title, and link