This editing is fantastic, the explanations are clear and concise and completely without obfuscation. You, sir, are a gentleman.
Big faxxx! so many nonsense intro to scraping vids, but not this one : ))
I’m sorry 😢 I’m not going
Bro this is crazy
I was trying to make a code to get stuff from my math homework website
When world needed him the most, He returned.
if you get an encoding error, try replacing the file-opening line of code with: file = open('scraped_quotes.csv', 'w', encoding='utf-8', newline='')
Hey, I'm getting "NameError: name 'page_to_scrape' is not defined"
Lost me when you said "a Raspberry Pi"
Think of it as a mini computer
Great introduction. Clear, concise and covered related topics without being distracting. I look forward to your other videos on Python.
Where can we find out if we are allowed to scrape data from a specific website so that eventually we don't end up in trouble?
Does the scraping code/process work the same way for scraping product prices, e.g. trying to replicate camel for Amazon, or does that take additional authorization from Amazon?
Excellent question! Most popular websites have a scraping/crawling policy file called "robots.txt". It tells you what can and can't be scraped from the site. Here is an example, Amazon's robots.txt file (spoiler: you can't scrape much): www.amazon.com/robots.txt
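As a hedged illustration, Python's standard-library `urllib.robotparser` can check a robots.txt policy for you. The rules below are made up for the example; for a real site you would fetch its /robots.txt and feed the lines in the same way:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules -- substitute the contents of a real
# site's /robots.txt file here.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# An ordinary page is fine to fetch:
print(rp.can_fetch("MyScraper", "https://example.com/quotes"))
# /private/ is disallowed for all user agents:
print(rp.can_fetch("MyScraper", "https://example.com/private/report.pdf"))
```

Running your target URL through a check like this before scraping is a cheap way to stay on the polite side of a site's stated policy.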
@@Tinkernut what about those less popular websites with no robots.txt file?
@@jimavictor6022 As long as you don't scrape things like other people's documents from governmental sites, or usernames and passwords, you should be fine with the rest.
What website owners really worry about is their site's availability (whether it stays online) and bandwidth usage, since they pay for every gigabyte they send to and receive from users.
So as long as you don't take their site down, consciously or not, you're fine.
@@jimavictor6022 On top of that, they have automated ways to detect bots. The worst that can happen is getting your IP "banned", or simply restricted from viewing their webpages; that will happen way, way, way before you get sued by them.
@@JoaoPedro-ki7ct I really appreciate the reply. Thank you..
cool tutorial :D
for more complicated data I use XPath, although its syntax is a bit weird at first.
furthermore: validate, validate, and validate your data. you do not want a program that crashes randomly just because a value is missing, empty, or malformed :)
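That validation advice can be as simple as a small helper that rejects bad records before they ever reach your CSV. This is my own sketch (the names are not from the video); it treats missing, empty, or whitespace-only fields as invalid:

```python
def clean_record(raw_text, raw_author):
    """Return a (quote, author) tuple, or None if either field is unusable."""
    if raw_text is None or raw_author is None:
        return None
    text = raw_text.strip()
    author = raw_author.strip()
    if not text or not author:
        return None
    return (text, author)

records = [
    ("\u201cA valid quote.\u201d", "Some Author"),
    (None, "Missing Quote"),      # element not found on the page
    ("   ", "Whitespace Only"),   # empty after stripping
]

cleaned = [r for r in (clean_record(t, a) for t, a in records) if r is not None]
print(cleaned)  # only the first record survives
```

Filtering like this keeps one malformed page element from crashing the whole run.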
Web scraping is to copying and pasting manually as copying and pasting manually is to using your eyeballs, memorising the text, then typing it into a file. There is no difference between surfing the web and web scraping; one is just faster. Like how copy/pasting something from Wikipedia is faster than reading and re-writing it.
Yes, automation is a huge time saver 👍🏾
This is exactly what I was looking for. Very concise and helpful, thank you!
I need more content on Rasberry PICO !!
Our lord has returned.
haha awesome man. I don't even do coding but couldn't resist following along just to try it! Cheers!
This smart man is still alive
I was about to comment the same lmao.
Very cool project ! I am a beginner in Python and this was right up my alley. I think Data science is going to be my forte. Thanks so much for this !!
We should connect
A savvy businessman could use web scraping to pull product pricing from a competitor's website (product numbers, photos, prices), then use it to monitor their price changes and/or adjust his own prices to stay just a slight bit more competitive.
Halloween intro? At the end of November? This videos been a while in the making huh?😂
Thanks for sharing the expertise! However, I get the following error when running the code.
writer.writerow([quote.text, author.text])
UnicodeEncodeError: 'latin-1' codec can't encode character '\u201c' in position 0: ordinal not in range(256)
Your technological code geniusness shall be added to my own. Seriously looking for this. Thanks!
Love the Borg reference XDD
@tinkernut you are the reason for me being a software developer..
Thanks dude. Keep up the good work..
I had to add encoding to the line--- file = open("scraped_quotes.csv", "w", encoding='utf-8')
not for beginners - immediately starts the tutorial with stuff only an experienced person would know.
I have searched for scraping tutorials for the last month, but this is the BEST. Thanks so much!
I can teach you web scraping from the basics to advanced. If that would help, you can reach out to me.
@@japhethmutuku8508 could you help me please bro
@@japhethmutuku8508 please help me
This is crazy to see your videos again being recommended :o
it has been years since I saw your last video!
Long time no see.
This may be useful for tracking stock for a PS5/Xbox/Switch/GPU in these times.
Even a Switch is being scalped?
I heard about PS5, Xbox Series X|S, GPUs but not about the Switch itself.
I actually needed this!
currently planning my computer science A-level project and wanted to learn what this web scraping thingamajiggy was all about
this video was an amazing introduction! simple, clear, but not overly professional
didn't leave me feeling overwhelmed, and i'm going to watch more of your tuts now, cheers mate!
Error: "No module named bs4"
Facing the same, were you able to fix it?
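For anyone hitting this: `bs4` is the import name of the third-party Beautiful Soup package, which has to be installed separately. Assuming a standard pip setup, the usual fix is:

```shell
# Install Beautiful Soup for whichever Python runs the script;
# "python -m pip" avoids installing into the wrong interpreter.
python -m pip install beautifulsoup4

# Then verify the import works:
python -c "import bs4; print(bs4.__version__)"
```

If you have several Pythons installed (common on Windows), the "module not found" error usually means pip installed the package for a different interpreter than the one running your script.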
dude where were u?
So far in my life, this has been the smoothest learning process I have ever experienced. Thank you kind sir!
This channel used to get like 100k views. Now it's down to less than 10k. Idk why. When I was around 13, I wanted to make an FPS game and found his video very interesting. I've followed this channel since then. Tinkernut was the reason I started learning programming, after watching his HTML tutorial (create a website from scratch). Even though I don't have a comp-sci degree and don't work as a programmer, I'm still learning Python in my free time. Thank you Daniel.
I swear to god you are the best!
I now see why YouTube doesn't recommend great videos. It's because YouTube doesn't want people to study tech!!
it's not working with OpenTable
Need more advance lessons on scraping.
What if the data you are searching for is obtainable, but is spread across separate pages within a given site?
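One common pattern for multi-page sites is to follow each page's "next" link until there isn't one. As a hedged sketch in the markup style of the quotes.toscrape.com demo site (a `li.next` element wrapping the link, which is an assumption you should verify on your target site), the link-finding step looks like this on a local HTML snippet:

```python
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# A snippet shaped like the demo site's pager; a real loop would fetch
# each page, scrape it, then repeat while a "next" link exists.
HTML = '<ul class="pager"><li class="next"><a href="/page/2/">Next</a></li></ul>'

soup = BeautifulSoup(HTML, "html.parser")
next_li = soup.find("li", class_="next")
next_url = None
if next_li:
    # urljoin turns the relative href into an absolute URL to fetch next
    next_url = urljoin("http://quotes.toscrape.com/", next_li.find("a")["href"])
print(next_url)
```

When `next_li` comes back as None, you have reached the last page and the loop can stop.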
Hey man, this is great!! Happy to see another video from ya!
Just the inexpensive project I needed.
Your guide amounts to "download this library that does it for you", which doesn't really teach how the process at hand works. Simplicity at the cost of utility and educational value.
Thanks for this tutorial, Looking forward to the next part.
The legend is back!
I love that you used a Raspberry Pi in this tutorial. It's amazing to mess around on and do little experiments.
dude, that intro proves you have a bright future in infomercials!
I grew up in the early YouTube days. I was enamored by the computer knowledge that I could only get from channels like Tinkernut. There really were no schools that offered nuanced coding/web lessons when I was growing up. It wasn't until I went to college and got my degree in Computer Science that I was able to build a foundation in computational theory and all sorts of other fun subjects related to computers.
Thanks for helping me along the way to that journey, Tinker!
Beautiful tutorial, exactly what I've been looking for. Thanks a lot, Man!
What is the "w" on line 10 for? I am getting NameError: name 'scraped_quotes' is not defined
You probably have a typo
Running it with my code from github works fine github.com/gigafide/basic_python_scraping/blob/main/basic_scrape_csv_export.py
Thanks for the vid! After a VERY VERY long time I'm getting back into casual coding, and I'm looking to casually make some scraping-info programs for games, with the option to select which info the person wants to see.
So if the site allows scraping, would it be better to have my app-in-progress be independent, with checks done once a minute or every five minutes? Or have the info scraped, processed, and posted on a site I create, then retrieved by people using the app? That is, if I start sharing the app. My concern is annoying the site owners by checking too often. Forgive me if it's a silly question, I'm not experienced with scraping.
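Not a silly question at all. A common (hedged) approach is a simple loop that sleeps between checks; every few minutes is usually more than polite for game info. The helper below is my own sketch, demoed with a tiny interval so it finishes quickly:

```python
import time

def poll(job, interval_seconds, max_checks):
    """Run `job` repeatedly, sleeping between runs; returns how many times it ran."""
    for i in range(max_checks):
        job()
        if i < max_checks - 1:        # no need to sleep after the final check
            time.sleep(interval_seconds)
    return max_checks

# Demo with a fast interval; a real monitor might use interval_seconds=300
# (5 minutes) so the site's owners barely notice the traffic.
runs = poll(lambda: print("checked"), interval_seconds=0.1, max_checks=3)
print(runs)  # 3
```

If many people will use the app, scraping once centrally and serving the results from your own site is kinder to the source site than having every user's copy poll it independently.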
This is not so easy on Windows. I'm a beginner at this, but it keeps giving me "ModuleNotFoundError: No module named 'bs4'". I have spent hours online trying to figure this out.
Awesome 🔥 bro. Can you make a tutorial about tunnelling and vpns
Sure can! I made them both a few years ago ;-) Just search my channel
can't call scraping illegal, that's like saying you can't film in a public place. if they don't want it interacted with, they can pull it from the web... where it sits... in the view of the public...
Ok, so this is amazing, thank you! How would you generalize a scraper? Say I want to scrape all the news sites in the world and extract the main articles.
wooooow, it's been years since I last saw a Tinkernut video. I think about 10 years ago I learned SQL and PHP with your tutorial about making a webpage with users, passwords, etc.
man, so nice to see a video of yours.
Thanks, this was very good. Can you share a link where you have done the same for a website that requires a username and password? Thanks a ton.
well explained, ty
I use IDLE, but for some reason the 'soup.findAll' call says "NameError: name 'soup' is not defined" :(
Fixed 🤦♂
Can websites detect scraping? If so, how do i escape the dutch AIVD
Yes, they have their ways to detect automated requests, but what they do when they detect "bots" is up to each website.
yes and no. you can check for things like the user agent string, or try running JavaScript, or something like that. however, it's actually a really hard problem to solve, because a scraping script can look indistinguishable from a browser...
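For example, the default user agent of Python's HTTP tooling is easy for sites to spot. Identifying your scraper with a custom User-Agent header is straightforward; the scraper name and contact address below are placeholders:

```python
from urllib.request import Request

# Build a request that sends a custom User-Agent instead of the default
# "Python-urllib/3.x" string that many sites filter on.
req = Request(
    "http://quotes.toscrape.com/",
    headers={"User-Agent": "MyScraper/1.0 (contact: me@example.com)"},
)

# urllib stores header names in capitalized form, hence "User-agent" here.
print(req.get_header("User-agent"))
```

An honest, identifiable user agent with contact info is generally considered better etiquette than impersonating a browser, though a site can of course still choose to block it.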
So glad to see you posting again! I missed your videos so much. I believe my first video of yours was either How to Setup a Webserver or How to Make an Operating System. Both excellent videos!
I would give you 2 likes if i could
I’m so sorry, but I used VS Code and I can’t find the CSV file. Please, how do I go about this?
The code didn't create any CSV file, although I didn't get any error! Why is that?
Start from 1:17
when I write to the CSV file, for some reason there is always one empty row (with literally nothing in it) between the actual rows of data
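Those blank rows are a known quirk of Python's csv module on Windows: each record gets an extra carriage return unless the file is opened with newline="". A minimal sketch of the fix (the filename follows the video; the sample row is made up):

```python
import csv

# newline="" lets the csv module control line endings itself, which removes
# the empty row between records on Windows; utf-8 also covers the curly
# quote characters that trip up the default encoding.
with open("scraped_quotes.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["quote", "author"])
    writer.writerow(["\u201cAn example quote.\u201d", "Some Author"])

# Read it back: two consecutive rows, no blank line between them.
with open("scraped_quotes.csv", encoding="utf-8") as f:
    lines = f.read().splitlines()
print(lines)
```

The same open() call also fixes the UnicodeEncodeError some commenters hit, since the curly quotes on the page can't be represented in latin-1.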
Are you still gonna make the next video showing how to access sites that require a login?
This is nice! Now I just want to know: how do I tell whether the page I want to scrape allows it?
I always end up back here when I need a refresher on scraping ❤ thank you!
Best youtuber.
Any idea on how to identify whether website owners allow data scraping or not?
Great content, thank you. How do we know if a website has a problem with scraping, without actually going to jail and having to explain to my wife why I destroyed everything in our lives? (Please, your honor, I NEED the death penalty for this heinous crime. DO NOT LET THAT WOMAN BAIL ME OUT!)
It feels like making API requests in JavaScript.
Every Python coding video is just some dude typing out a bunch of random gobbledygook and never explaining how to find the libraries, keywords, and functions in the first place... Why? Is this maybe intentional? Keeping everything mysteriously vague... I dunno 🤷🏼♂️
Which sites are you NOT allowed to scrape?
I'm only giving a good comment bc my gf told me to.
Good video👍
this seems so refreshing? Why did he stop uploading?
Fantastic video. Short and useful 👍
This is not all you need to know. If the page you're scraping dynamically loads content through javascript, you should use selenium to render that before parsing the HTML.
I had no clue it was this easy, but how do I find out which websites I'm not allowed to scrape? All I get from Google is ways to prevent scraping on my own website (which I don't have, but that's beside the point).
The really dry jokes are surprisingly pleasant.. who could scrape the web without a web? What do you think all the spiders think about that?
Davy504 fan? "Scrape it..." Just kinda reminded me of the ol' "SLAP IT!" line. lol
Awesome video! The code didn't run for me using findAll, but it worked with this:
quotes = soup.find_all("span", attrs={"class":"text"})
authors = soup.find_all("small", attrs={"class":"author"})
Thank you so much for this!
Great video. With the phrase "web scraper", I can't help but picture a function that returns a digital box chevy with candy paint, 26" chrome rims, tinted windows, and triple 15" subs in the trunk with some Too $hort going. I hope someone else from Northern California is thinking the same thing, and cracks up seeing this.
But thank you for your fantastic educational video! cheers.
how much more difficult is it if I want all the sub-pages, where you would normally find more information?
I just checked a website I want to scrape in the future, but this will be significantly more difficult. I want to get live train schedules, but the live data is inside a JavaScript pop-up window.
You might need dedicated tools for that; things like Selenium or something related could help you with it.
OMG! Your channel is still alive! I remember 8 years ago I made a keylogger with the help of one of your videos.
Thanks a lot for this clear video! How would I retrieve more information associated with each quote? For instance, I would like to retrieve and print both the author and the associated tags.
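Assuming the markup style of the quotes.toscrape.com demo site (each quote in a `div.quote` with `a.tag` links inside a `div.tags`; I'm inferring the class names, so check the real page source), you can grab the tags alongside the text and author like this:

```python
from bs4 import BeautifulSoup

# Trimmed-down markup in the style of the demo site; the class names
# (quote, text, author, tag) are assumptions based on that site.
HTML = """
<div class="quote">
  <span class="text">\u201cAn example quote.\u201d</span>
  <small class="author">Some Author</small>
  <div class="tags"><a class="tag">life</a><a class="tag">example</a></div>
</div>
"""

soup = BeautifulSoup(HTML, "html.parser")
for quote in soup.find_all("div", class_="quote"):
    text = quote.find("span", class_="text").text
    author = quote.find("small", class_="author").text
    tags = [a.text for a in quote.find_all("a", class_="tag")]
    print(text, author, tags)
```

Searching within each `quote` element (rather than across the whole page) is what keeps every quote paired with its own author and tags.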
Love your videos. I don’t understand much of the content, but what’s the difference between taking these quotes via code and just copy-pasting into an Excel sheet? I’m a noob, sorry.
You can do it automatically every X amount of time.
You can use a "bot" to do something with that data you scraped.
I don't use Excel, but if you're talking about what I think you are, Excel is doing exactly what was shown in this video: web scraping.
The thing is, Excel does it for you without you needing to program it first, but the scraping it can do is very, very limited compared to what dedicated scraping tools can do.
In practice? Nothing is different; you get the same result. However, let's say you have a website with 2000 quotes and you need to keep a sheet up to date. That's where a scraper would be useful, as it's time you really only need to spend once. Plus, at that kind of scale it would be faster to write the code than to do it manually.
@@JoaoPedro-ki7ct thank you!
I was given a task in my internship that involved web scraping and this was very helpful, thank you!
Funny how it's titled Beginner's Guide to Scraping, and once he's done with the introduction he starts typing a bunch of code that "beginners" have absolutely no clue how to write... Thanks, man, great help!
what if I want just the first quote, not all of them?
this is entertaining the first thirty seconds lol
Thanks! Super basic but it was what I needed to make my code start working!
Is it `quote.text` because in the HTML we see itemprop="text"? If (for example) the HTML instead had itemprop="banana" around `“The end is only the beginning.”`, would we rock with `quote.banana`?
Man... I've seen other web scraping tutorials, and they take you ten miles down the road and throw all types of advanced garbage at you. Granted, I know what you have shown here is the quick and easy way, but that's all I wanted: an understanding of what it is and how it basically works. Thank you.
Honestly this is just what I needed 😭
Amazing video to get you started with scraping, thanks!
Very practical and helpful video with very detailed explanation!
Thank you for the video
it helped me understand how a scraper works
great video! seems very straight forward and easy to follow. I will be trying it out in the next day or two
it's a coincidence that I have a task to scrape data, format it to CSV, then send it by email. thank you for this tutorial, sir.
Helpful indeed, thanks!
you owe me bro. i just subscribed to your channel😂😂
Last time I did something like that, I used a line-mode browser to flatten the webpage.