Hi, in my case I'm using Python 3, so what about urllib? Which one do I need to use?
This video is awesome. Fast and straight to the point, this helped me to make a script for a company, thanks. Subscribed
trying to scrape data from Indeed's job postings and your video helped a bunch. thanks for the help my dude
Thanks, Holland.
Not gonna lie, you taught this better than my professor did. Thanks man!
The video is really good, Mr Peter Parker
What do I do when there is ng_content in the class? When I print it out, nothing comes back.
I'm not sure of the specifics of your situation, but ng prefixes usually mean the site is using Angular. JavaScript SPAs can be difficult to scrape because the JavaScript is not run, so content may be missing. You may want to explore other routes, such as seeing if there is a web request pulling the content in after the page loads, and whether you can scrape directly from that.
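To illustrate that approach: if you find the JSON request in the browser's Network tab, you can often fetch it directly and skip the empty HTML shell. This is only a sketch; the endpoint URL and the `items`/`name` keys below are assumptions, since every site exposes different endpoints and payload shapes.

```python
import json
import requests

def fetch_payload(api_url):
    """Fetch the JSON the page loads after render, instead of the HTML shell.

    api_url is the XHR endpoint found in the browser's Network tab
    (hypothetical here -- each site names its endpoints differently).
    """
    resp = requests.get(api_url, headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()
    return resp.json()

def extract_names(payload):
    """Pull one field out of the payload; 'items' and 'name' are assumed keys."""
    return [item["name"] for item in payload.get("items", [])]

# Canned payload so the shape is visible without a network call:
sample = json.loads('{"items": [{"name": "First"}, {"name": "Second"}]}')
print(extract_names(sample))  # ['First', 'Second']
```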
Mine just ends up with this:
print username + ' ' + uploads + ' ' + views
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
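For anyone hitting this: that line is Python 2 syntax, and in Python 3 print is a function. Assuming username, uploads, and views are strings as in the video:

```python
username, uploads, views = "somechannel", "42", "1000"  # placeholder values

# Python 2 (fails on Python 3 with the error above):
#   print username + ' ' + uploads + ' ' + views
# Python 3 needs parentheses:
print(username + ' ' + uploads + ' ' + views)

# Or, more idiomatically, an f-string:
print(f"{username} {uploads} {views}")
```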
Thanks, Peter P. :)
How can we create another sheet (tab) within the same CSV file? Can you please share that?
I am getting an error: AttributeError: 'NoneType' object has no attribute 'find_all'
How to solve this?
I'm using the requests lib instead of urllib2, and Social Blade is blocking me with Cloudflare.
any tips?
At 17:39, I'm getting an error after I type the line, saying:
TypeError: a bytes-like object is required, not 'str'
I have exactly what you have
@Lenovo P70 Phone Sir, I have this error: 'NoneType' object has no attribute 'find_all'
@Lenovo P70 Phone Sir, please share your working code
Please, sir 🙏🙏🙏🙏
@Lenovo P70 Phone Also, I tried this and it worked for me. It took me a while to figure it out:
file = open('UA-camr.csv', 'w', encoding='utf-8', newline='')
writer = csv.writer(file)
writer.writerow([username, upload, views])
So, basically, don't use .encode
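Spelled out fully, that fix might look like this (placeholder filename and values; the key points are text mode, newline='', and utf-8, so no .encode is needed):

```python
import csv

username, upload, views = "somechannel", "42", "1000"  # placeholder scraped values

# Text mode + newline='' is what Python 3's csv module expects;
# encoding='utf-8' handles non-ASCII without calling .encode yourself.
with open('channels.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['username', 'uploads', 'views'])  # header row
    writer.writerow([username, upload, views])
```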
I'm not able to convert imported data from text format to number format, and I'm having issues with VLOOKUP because of this. Any quick fix or video that you have on it? I've already tried quite a few options but no success.
Looks like doing Text to Columns in Excel is what you want my friend: support.office.com/en-us/article/Convert-numbers-stored-as-text-to-numbers-40105f2a-fe79-4477-a171-c5bad0f0a885
from the side you almost look like Tom Holland 😁
How do we scrape from a page with inspect element disabled?
Hey, I just got this error, could you please help me find a solution?
Traceback (most recent call last):
File "main.py", line 28, in
writer.writerow(['ProductName','Price'])
TypeError: a bytes-like object is required, not 'str'
Yeah, I had the same error. Apparently we put in the header as strings, but Python wanted bytes. It's been a year, do you have any solution to that?
Sir, which software do you use to execute this?
Yoo, I didn't know Spider-Man could teach you about programming
Nice tutorial man... Great job
Hi, can you upload a video on how to get information from the HTML inside the hyperlinks?
I'm getting this error: AttributeError: module 'urllib3' has no attribute 'Request'
Two changes might help you:
import requests
request = requests.get(url)
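Put together, the requests version of the old urllib2.Request/urlopen pattern might look like this (the User-Agent header is optional but helps with picky sites):

```python
import requests
from bs4 import BeautifulSoup

def get_soup(url):
    """Fetch a page with requests (the usual Python 3 replacement for
    urllib2.Request + urllib2.urlopen) and parse it with BeautifulSoup."""
    response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
    response.raise_for_status()  # fail loudly on 4xx/5xx responses
    return BeautifulSoup(response.text, "html.parser")

# usage (any page you are allowed to scrape):
# soup = get_soup("https://example.com")
# print(soup.title)
```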
Thx a lot. It really helped
Nice ! Thank You
you deserve my like
TypeError: a bytes-like object is required, not 'str'. Help!
Remove the b in wb in the file open
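To make that concrete: 'wb' opens the file in binary mode, and Python 3's csv module then demands bytes instead of str. Text mode fixes it (the filename and rows here are placeholders):

```python
import csv

# open('output.csv', 'wb') would raise:
#   TypeError: a bytes-like object is required, not 'str'
# Text mode with newline='' is what the csv module expects on Python 3.
with open('output.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['ProductName', 'Price'])
    writer.writerow(['Widget', '9.99'])
```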
helped indeed! thanks
Man, God bless you. Just one thing: do you know why some letters don't appear? I mean, I set UTF-8, but some characters are missing.
Using your code I'm getting an error on this line:
for row in rows:
for row in rows:
^
SyntaxError: unexpected EOF while parsing
PS: I'm using Spyder, Anaconda v3.7
Did you indent the next block of code correctly? The next line should be indented.
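For example, with placeholder data (a bare for line with nothing indented under it is an incomplete statement, which is exactly what "unexpected EOF while parsing" means):

```python
rows = [['alice', 10], ['bob', 20]]  # placeholder table rows

for row in rows:
    print(row)  # the loop body must be indented under the for line
```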
Superb👍👍
Thanks, sir, these are really good tutorials. If you make more Python web scraping tutorials, that would be great for me.
Hi SyntaxByte, the link for the Python book is broken, can you update it? Cheers on the video, super clear and great explanations!
Awesome
amaaaaaaaaaaazing
Struggling a bit to actually read your code. It's very small even on full screen.
Which is exactly why there is a link to it in the description.
how to install urllib2?
You don't need to download it, it comes with Python. If you're using version 3 (hopefully you are at this point) you can use urllib. Check the docs because it may have a slightly different interface: docs.python.org/3/library/urllib.html
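The urllib2 calls from the video map onto Python 3's urllib.request roughly like this:

```python
from urllib.request import Request, urlopen

def fetch(url):
    """Python 3 equivalent of urllib2.Request + urllib2.urlopen.
    The User-Agent header helps with sites that reject the default one."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req) as resp:
        return resp.read().decode("utf-8")

# usage:
# html = fetch("https://example.com")
```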
@@syntaxbyte Thanks, I can try to do scraping now :)
if tom Holland could code
ModuleNotFoundError: No module named 'urllib2'
What version of python are you using?
@@syntaxbyte 3.6
do you have requests installed?
pip install requests
@@fawadh Use urllib3. That's what I had to do.
Give the guy a beer. Thanks for the tutorial.
Hello handsome guy :D
3:29 for a great slurp
6:57 for a greater slurp
u look like tom holland
just Spiderman (Tom Holland) WEB scrapping.
He looked like Tom Holland
Liked the video but you gotta stop drinking from that clunky mug.. killer to anyone on headphones.
Lol, good to know, thanks!
waste of time nothing worked
Bro, drink your coffee first... no no, drink it first... you're just messing around
I am getting this error
writer.writerow = website.encode('utf-8')
AttributeError: 'NoneType' object has no attribute 'encode'
How to solve it?
Seems like your website variable isn't initialized properly. I can't know without seeing more code.
@@syntaxbyte import requests, csv, time, webbrowser
import re
import row as row
from bs4 import BeautifulSoup
from selenium import webdriver
user_input = input("enter something to search:")
print("googling...")
google_search = requests.get("www.google.com/search?q=" + user_input)
soup = BeautifulSoup(google_search.text, 'html.parser')
# print (soup.prettify())
csvFile = open('juliuskoch.csv', 'wt')
writer = csv.writer(csvFile)
# write header row
writer.writerow(['website'])
for link in soup.find_all('a', href=True):
website=print(link.get('href'))
writer.writerow([website.encode('utf-8')])
csvFile.close()
This is my code I am getting error
Your indentation on the for loop is incorrect. You need to indent the lines that gave you the error so they are part of the for loop. Also, website = print(...) sets website to None, because print() returns nothing; that is where the 'NoneType' error comes from. Assign link.get('href') directly instead.
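A version with both fixes applied might look like this, restructured into a function (the unused imports of re, row, time, webbrowser, and selenium are dropped; note that Google often blocks or obfuscates scripted searches, so results can still vary):

```python
import csv
import requests
from bs4 import BeautifulSoup

def save_links(query, out_path="juliuskoch.csv"):
    # The https:// scheme is required, or requests raises MissingSchema.
    page = requests.get("https://www.google.com/search?q=" + query)
    soup = BeautifulSoup(page.text, "html.parser")
    with open(out_path, "w", newline="", encoding="utf-8") as csv_file:
        writer = csv.writer(csv_file)
        writer.writerow(["website"])  # header row
        for link in soup.find_all("a", href=True):
            website = link.get("href")  # not print(...): print() returns None
            writer.writerow([website])  # text mode + utf-8, so no .encode needed

# usage:
# save_links(input("enter something to search:"))
```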