🚨 Make sure you install the same version of selenium I use in the video: pip install selenium==3.141.0 to avoid any deprecation message || 🔥Join my 8-hour Web Scraping course: www.udemy.com/course/web-scraping-course-in-python-bs4-selenium-and-scrapy/?referralCode=291C4D7FF6F683531933
I'm not a rich man, I am an intermediate school student, so kindly recommend me a free web scraping course and I shall be grateful to you. And thanks a lot for sending me this link. May ALLAH give you a long and happy life.
This is the best selenium tutorial video I've come across so far. Kudos to you, mate!
Thanks!
Agree!
This is the best Selenium tutorial on YouTube. Thank you for the tutorials.
This made web scraping with Selenium so easy for me. Thank you!
I've just watched this video. Moreover, it's really powerful and helped me understand the topic. Thanks a bunch
Now I understand how to make loops and export everything. Really, God bless you. Your way of explaining is simply awesome, I loved it
Your lecture is very helpful. It is awesome, love from India.
Very nice video and a great introduction to Selenium and web scraping with python in general 🙂
Glad it helped!
YOU ARE A GOD... NO QUESTION ABOUT IT
Exceptional tutorial! Congratulations!
you are a legend mate, thank you
Thank you for your help, you saved my life
Thanks for this, it helped me understand the .ui module and Select.
Still need to understand .common and .exceptions.
Great video! You are a great teacher
Great beginner video man. Appreciate it!
Thanks!!
Great tutorial, thanks man
Thanks!!! Great tutorial!
With some code modifications I can completely scrape a dynamic website.
Thanks a lot, Frank. This video helped a lot.
This is cool. I've just read your medium article. Keep up the good work
Thanks!!
BRILLIANT!!!
perfect lesson bro!
very good explanation, thank you !
You are super good, thank you sir ☺️
Great video, very valuable.
One of the best videos with amazing teaching quality, bro. You are far better than many of the professors out there.
In this video, instead of Spain, if I want to select and scrape information about all the nations and their leagues, is there any simple way?
Thank you PyCoach. This video is very helpful. Can you please do us a video on how to scrape reviews from trip websites that are embedded in multiple pages?
I think modern Selenium works this way:
all_matches_button = driver.find_element('xpath', '//label[@analytics-event="All matches"]')
all_matches_button.click()
rows = driver.find_elements('xpath', '//tr')
for row in rows:
    print(row.text)
Thank you! Where do I find out more, please?
Thanks a lot, Sir. You're helping us.
Thank you so much for sharing!!!!
Good my brother
Thanks!!
Hey @ThePyCoach, it's a really nice video. Can you share the cheat sheet too, as the website in the description isn't working?
What if my version of Chrome is 121?
Frank, the website you are using now has season 22/23 that is blank. You are showing 2021 season that is populated. I am assuming that is why your code works and mine does not. Is there a work-around for this?
It would be interesting to look at a web scraping project using the Scrapy framework.
There’s one coming soon. Stay tuned!
Thank you so much boss 😊
Hey man, loved your video. I have a request though: can you make something that can scrape any type of business at the location I provide on Google Maps?
Hi Frank - great video for me as a newbie.
I managed to get the csv file to create and save to my projects folder but some of the scores are represented as dates in the file, e.g. 02-Mar, 02-Jan. The data in the text editor is fine though.
Is this just an issue with Excel saving the score column export as a mixture of 'General' & 'Custom'?
Is there any solution to this?
Thanks. [I'm using python 3.10 and Atom editor on Windows].
Oh yeah I remember about that. It’s just an Excel issue. You won’t have that problem when reading the file with Pandas.
If you want to stick with Excel, you need to change the cell types so the scores aren’t seen as dates.
If you are using webdriver-manager along with Selenium, how would you append data?
Can this be done in Visual Studio?
Any idea how to click on 'a href' elements? WebDriverWait is being used and the element is present, but it only works sometimes.
Is there a coupon code for your Udemy Web Scraping course?
For some reason the latest version of Selenium chose to get rid of find_element_by_xpath and the other such methods, and use a generic find_element with
"id" | "xpath" | "link text" | "partial link text" | "name" | "tag name" | "class name" | "css selector" passed in as the first argument.
Thank you.
You're welcome!
How do I scrape from a list of URLs in .csv file instead of 1 URL only?
If you're referring to the other video where I scrape with Pandas, then it's simple. You only have to add a for loop and find a pattern within the URLs.
Much love
Damn! You have a very cool voice.
Hi, thanks for sharing, but the Python for Data Science Cheat Sheet is not working.
How did you get that "path"??
I really liked your video, but I have an issue: when I try to run the program it says no such element: Unable to locate element: {"method":"xpath","selector":".//td[3]"} and I don't know why. Hope you can help me.
Hi Frank, I have a quick question. Is it possible to write one complex Python script that can interact with different kinds of websites for automation, or do we have to write a different, customized script for every website? Thanks
After those first 5 lines, I keep getting errors saying executable_path has been deprecated. It does open the website but keeps giving me that error in red. Not sure what to do.
I think it has to do with chromedriver. Download it again and leave it in the path you indicate.
You can also try updating Selenium with pip.
thanks!
You’re welcome!
Hello, I have a query.
How do I write one line of code for clicking multiple web elements?
For example, is there a way to write something like driver.find_element(By.XPATH, element_1).click().driver.find_element(By.XPATH, element_2).click().driver.find_element(By.XPATH, element_3).click() in ONE LINE?
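Chaining .click() calls like that can't work, because .click() returns None rather than the driver. A hedged sketch of the closest alternative (element_1 etc. stand for the commenter's placeholder XPaths):

```python
def click_all(driver, xpaths):
    """Click each element located by the given XPaths, in order.
    .click() returns None, so the calls cannot be chained."""
    for xp in xpaths:
        driver.find_element('xpath', xp).click()

# Technically a one-liner, though the loop above is clearer:
# [driver.find_element('xpath', xp).click()
#  for xp in (element_1, element_2, element_3)]
```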
Hello sir, I followed your lecture and I am stuck at one part of the execution, where the for loop displays the error 'WebElement' is not iterable. Could you please help with this?
The code was working fine last time I checked. You can add a pause -> time.sleep(5) to let the website load correctly, in case you get empty data.
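Note that time.sleep always pauses the full duration whatever happens on the page. The tiny polling helper below sketches what a smarter wait does, returning as soon as a condition holds; the commented lines show roughly the real Selenium WebDriverWait API, assuming a driver and a By import.

```python
import time

def wait_for(condition, timeout=10, poll=0.5):
    """Poll `condition` until it returns a truthy value, or raise
    after `timeout` seconds -- a bare-bones stand-in for
    Selenium's WebDriverWait."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError('condition not met within %s seconds' % timeout)

# The real Selenium equivalent is roughly:
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
# WebDriverWait(driver, 10).until(
#     EC.presence_of_all_elements_located((By.XPATH, '//tr')))
```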
Yeah, same here too. There are a few others in the comments getting this error as well.
Update: I forgot to add '.text' at the end of find_element. This was giving me errors -> find_element(By.XPATH, 'xpath string'); what fixed them was find_element(By.XPATH, 'xpath string').text. Rookie mistake, lol. Great video!
Is there a way to scrape a dynamic website without specifying the div or element? I'm coding it to scrape any dynamic website, and each site has different div/element names, so how would I scrape any dynamic website?
I didn’t quite understand you, but the best way to find an element in dynamic websites is building an XPath. There are functions that you can use inside an XPath to overcome the typical challenges of dynamic websites
@@ThePyCoach So I want to use Selenium and BeautifulSoup to scrape dynamic websites, but instead of scraping a specific site by specifying its elements and divs, I want to be able to scrape any dynamic website. How would I scrape more than one site at once, since they have different divs and element IDs? I want to use BeautifulSoup and Selenium to get all the text on more than one dynamic site at once, not just focusing on one site.
I need help please. I can't even get past the driver path section in the beginning; I keep receiving an error message: SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 0-1: truncated \UXXXXXXXX escape. But I'm copying the path from where I downloaded the chromedriver.
Would it be because I have Chrome 64-bit instead of 32-bit? If so, how do I change that?
Just put the file in the same folder as your Python scripts, much easier.
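For reference, that unicodeescape SyntaxError happens because "\U" in a Windows path like "C:\Users\..." starts a unicode escape sequence inside a normal string literal. Any of the following forms avoids it (the path shown is just an illustrative example, not the commenter's actual path):

```python
# Three equivalent ways to write a Windows path safely:
path_raw = r"C:\Users\me\Downloads\chromedriver.exe"      # raw string
path_fwd = "C:/Users/me/Downloads/chromedriver.exe"       # forward slashes
path_esc = "C:\\Users\\me\\Downloads\\chromedriver.exe"   # escaped backslashes
```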
Hello, I don't have the attribute find_element_by_xpath, I only have find_element and find_elements.
Instead of putting the table in a pandas DataFrame, why don't we just print(td1, td2, td3, td4) and create a list? Is it not easier? What's the point, please?
Why do people sometimes use BeautifulSoup with Selenium? Is there anything bs4 can do that Selenium can't?
I can't speak for everyone, but I used to use both BeautifulSoup and Selenium because it was easier for me to remember the syntax of BeautifulSoup when writing code, and I felt that Selenium wasted more resources when scraping content. So I only loaded the JavaScript-driven website with Selenium and let BeautifulSoup scrape all the data.
That said, now I only use Selenium, because there isn't anything BeautifulSoup can do that Selenium can't.
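The hybrid workflow described above (Selenium renders the JavaScript page, BeautifulSoup parses the resulting HTML) can be sketched like this, assuming bs4 is installed; in practice driver.page_source would supply the HTML:

```python
from bs4 import BeautifulSoup

def row_texts(page_source):
    """Parse rendered HTML with BeautifulSoup and return the text
    of every table row."""
    soup = BeautifulSoup(page_source, 'html.parser')
    return [tr.get_text(strip=True) for tr in soup.find_all('tr')]

# Hybrid usage: Selenium loads the page, bs4 does the parsing.
# rows = row_texts(driver.page_source)
```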
I have an issue where my chromedriver doesn't automatically go to the web URL.
How can I make it so that when I run my Python code, Chrome automatically goes to the URL?
do you still have this issue?
@@japhethmutuku8508 yes
@@japhethmutuku8508 No. Instead of downloading the chromedriver, I had downloaded Chrome itself. So it was just me downloading the wrong file.
Does this work in VS Code?
Is it compatible with Python 3.10.0?
I’m not quite sure because I was running Python 3.8 when I made the videos and I haven’t updated so far.
Please let me know if everything’s working fine with 3.10
I'm running 3.10.5 and it works just fine. Selenium is a bit slow, so give it 2 minutes after scraping and it will create the .csv file and show you the data frame. GREAT tutorial! Very clear and easy to follow.
Can't get past the first part, keep getting "handshake failed; returned -1, SSL error code 1, net_error -100" when trying to find elements
Hey I didn't receive any mail from your side when I put in my name and email and submit it for the python cheat sheet.
That's odd. Maybe it went to the spam folder. Anyway I have 2 forms. If one doesn't work, try the other
Form 1: frankandrade.ck.page/d3b1761715
Form 2: frankandrade.ck.page/bd063ff2d3
@@ThePyCoach Yes the first one worked for me now, thank you so much!
Cheat sheet is not available anymore... :-(
Unfortunately, all those find_elements do not show up for me...perhaps it's been deprecated?
Depends on which version of Selenium you have. If you have Selenium 3 the methods I use on the video should work, but if you have Selenium 4, you should use another method: .find_element(by=“”, value=“”). Check the cheat sheet for more info
Hi, I am getting this error: WebDriverException: unknown error: cannot find dict 'desiredCapabilities'
(Driver info: chromedriver=2.25.426923 (0390b88869384d6eb0d5d09729679f934aab9eed),platform=Windows NT 10.0.19044 x86_64), Can anyone please tell me how to fix this?
How do I get data from a Python script and paste it into the search bar using Selenium?
I don't quite understand your question. If you mean typing text into the search bar, you can do so by using the .send_keys method.
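A hedged sketch of that .send_keys approach; the XPath below is a placeholder (inspect the actual page to find the real locator for its search input), and the helper name is made up for illustration.

```python
def search(driver, query):
    """Type `query` into a search input and submit the form."""
    # Placeholder locator -- replace with the site's real one.
    box = driver.find_element('xpath', '//input[@type="search"]')
    box.send_keys(query)
    box.submit()
```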
Determination is key, and so is reframing things you view as complicated.
Can you please upload the cheat sheet again. The link is not working
I've just checked the link and it's working just fine 🤔
👍👍👍
I get a traceback timeout error.
I just can't scrape the whole website, it only returns the first result... I'm going crazy, omg...
try:
    # If the element is found, skip to the next iteration
    if match.find_element(By.XPATH, "./td[@data-ng-if='dtc.isUpcomingFixture(match)']"):
        continue
except:
    # If the element is not found, process the match normally
    pass
Add this inside the for loop, under the for condition, so the loop ignores the upcoming matches' tr and td elements, which might cause an exception.
Hello, I have a problem with the attribute find_element_by_xpath. It shows me the following error: date.append(match.find_element_by_xpath('./td[1]').text) -> AttributeError: 'WebElement' object has no attribute 'find_element_by_xpath'. Can you help me please?
same problem
@@Timantinpoimija I have found the solution, I will give it to you in a few moments.
@@zakariyaeaitmouh3216 I just found it myself too
@@Timantinpoimija okey all right
I am getting the error 'str' object is not callable on the line match_date.append(team_match.find_element(By.XPATH('./td[1]').text())). Can anyone advise, please? I tried it with .text as well but no result.
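That error most likely comes from calling By.XPATH as if it were a function: By.XPATH is just the plain string "xpath", so By.XPATH('./td[1]') tries to call a string. A hedged sketch of the corrected call (the helper name is made up; the locator is the commenter's own):

```python
def first_cell_text(team_match):
    """The locator goes in as a separate argument to find_element,
    and .text is an attribute, not a method."""
    return team_match.find_element('xpath', './td[1]').text

# match_date.append(first_cell_text(team_match))
```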
Every time at the instruction website.get() I get the Selenium web tab, and after fully loading it disappears. Why is this happening?
The dot won't work: name = n.find_element(By.XPATH, "./html/body/div[1]/div[4]/div/div[3]/div[1]/div/div[1]/div[1]/h3/a")