It is very hard to upload a video every day with good editing.
He is one of the most hardworking youtubers
The quality of the desktop recording is so low!
if he uploaded this today and not scheduled in advance, youtube takes time to transcode the HD video. His cam probably looks good so far because it's likely a DSLR hooked straight into the computer; give it some time and the resolution will be there :)
@@BryanJenks but it has already been transcoded to HD, even to 4k :).
Yes, I could not read what was written on the terminal even at 1080p
@@kezy2695 weird... i wonder if it was a rendering issue o_O
It's that his desktop aspect ratio is wide and the dimensions of the video format don't really match it, so things feel a little squeezed
Dude, you're one of my biggest youtube inspirations, you've inspired me to REALLY want to learn python better. Love your aesthetic and how you balance nature and tech!
What I love about this channel is that when he sees a good comment with a video idea, he makes a video about it. Such a great channel, I love it
At 5:35, instead of cleaning up the list every time the program is executed, you should do it just once, save it in an external file, and then import it. Also, at 6:00, instead of choosing a random number within the list index range and then selecting the element at that index, you can simply choose a random element from the list with random.choice(bird_list)
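A minimal sketch of both ideas (the file name bird_list.json is just a placeholder, and it assumes the list was cleaned and saved once by a previous run):

import json
import random

def save_bird_list(bird_list, path="bird_list.json"):
    # One-time cleanup step: write the cleaned list out so later runs can reuse it.
    with open(path, "w") as f:
        json.dump(bird_list, f)

def load_bird_list(path="bird_list.json"):
    with open(path) as f:
        return json.load(f)

bird_list = load_bird_list()
print(random.choice(bird_list))  # pick a random element directly, no index math needed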
Great video, would be good to try and improve the desktop recording as it does degrade what you're trying to show. Keep up the good work.
Selenium has WebDriverWait and expected_conditions, which can check whether an element is loaded before proceeding. time.sleep is still a good idea, but page load times are inconsistent and network speed can vary. Great video btw
Awesome critique! As I look to replicate Kalle’s code I will see if I can successfully incorporate this tweak 💪🏾🙏🏾
@@VentiFriedChicken if you have time I would suggest you read Web Scraping with Python (O'Reilly) by Ryan Mitchell. It covers most scraping use cases in an intuitive way
@@sanjay50012 awesome! I will look to add it to the collection. I have a project coming up where I will most likely need to scrape to make up for a gap in data 🙏🏾
I really appreciate these sorts of videos. Great inspiration, and learning too, for a relatively low-level python user like me. In fact, I like that your code isn't optimized and perfect. You rather show your projects as they are, kinks and all. That led me to have a great time learning from other experienced programmers in the comment section, who all contributed different ways to solve the same problem. Keep it up Kalle!
Believe me, the best thing that ever happened to this channel is the daily videos, especially the automation videos that I like. Thank you
This is awesome! I just woke up but you motivate me to jump on my computer and code
I second that! Cool and useful project!
Nice idea! Really difficult to read the text on your screen though
Seems that his monitor resolution and recording resolution are different
9:01 Use explicit wait and implicit wait to stop selenium from getting to another function.
Your videos are awesome!
This has motivated me to get on with some automation tasks I still need to finish. Thanks for making all the content you do, it's really helped and inspired me :)
Have you considered using the requests module instead of selenium, as it's a lot faster?
The first thing that popped into my head as a data scientist and ml practitioner: "a pretrained RNN (a neural network, basically) could generate new names based on existing ones"
Awesome! I always love the automation videos - keep up the good work!
What code editor is it?
is it possible that you forgot to swap back the original video file instead of the proxies in premiere pro?
hey mate, is YouTube your full time job now or do you do development work too?
Is there not an API from GoDaddy that can be used instead of Selenium?
Cool, is there any GitHub repo for the code in the video?
Can you add the podcast link ??
Did you have kali linux on the background desktop?
Hi Kalle, how would you set up a toolbox of sorts that would allow multiple tools like this to run as needed? Some sort of cron job that saves the output to a file and runs other scraping tasks at the same time?
You just solved my problem which I was trying to fix for 3 days. THANKS.
just realized i wasn’t subbed to you , am now
i’ve been watching your vids for a bit now i love them thanks
Intro was epic
my fav type of video it is ! love your work bro
at 9:30 you can use browser.implicitly_wait(20)
Wow what was the intro song?
What happend to vim and linux?
Kalle: *Makes automation tutorial*
YouTube: *plays ad about the war against bots*
can i work on this and put this project on my resume?
Good idea, but I have a few remarks:
1. I would call the Birds API only once and then save the list of names in a file. The data is static, so why query the API every time?
2. Selenium has explicit wait functions (instead of time.sleep).
3. I would rather use requests and beautifulsoup4. Selenium is easier at first, but every few weeks my Selenium scripts stop working: every time Chrome is updated, you have to update the chromedriver manually...
4. There are already public APIs that let you search for (free) domains. Just google for them. I would rather use these APIs than scrape the website.
5. Or just use whois or the corresponding Python library? (rough sketch below)
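A minimal sketch of remark 5 that shells out to the standard whois command via subprocess (the domain is made up, and the "no match" / "not found" phrases vary by registry, so treat it as a heuristic):

import subprocess

def whois_says_available(domain):
    # Run the system whois client and scan its output for the phrases
    # many registries return for unregistered domains.
    result = subprocess.run(["whois", domain], capture_output=True, text=True)
    output = result.stdout.lower()
    return "no match" in output or "not found" in output

print(whois_says_available("someveryunlikelybirdname.com"))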
Yeeees. I love automation. I started learning python after ur videos☺
I'm watching every video to the end. Your work is amazing.
Try putting browser.get(url) in a variable. Do something like this:

webURL = browser.get(url)
if webURL:
    ...

Then you may not have to use the time.sleep function. Personally I use:

import webbrowser
opened = webbrowser.open(url, new=2)  # renamed so it doesn't shadow the module
if opened:
    # do something
    pass

It seems to work, although this module actually opens up a browser; I imagine it's a similar principle. Basically the if statement waits to see whether the browser is loaded, and when it is, it does something. Let me know if it works
I'll give this a go thx
how about async/await to resolve this problem ?
5:50 you could actually just do random.choice(birdList) to pick a random object from the list.
Did you do a crawler? I'd like to have my own mini search engine ... ;-) where I can filter results as I need them (using keywords with logic)
why am i getting a problem here: from tokens import get_creds
Crawling websites using selenium is slow in some cases. But wdyt about using requests-html? It can crawl websites with JavaScript rendering too
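If it helps, a rough sketch of what that looks like (the URL and selector are placeholders; render() pulls down Chromium the first time it runs):

from requests_html import HTMLSession

session = HTMLSession()
r = session.get("https://example.com")  # placeholder URL
r.html.render()                         # executes the page's JavaScript
for link in r.html.find("a"):           # CSS selector, also a placeholder
    print(link.text)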
why did you stop using flutter and dart?
Your videos are really helpful and motivating. Thank you.
Hey bro, thanks for the video! But I got one question: why are you using sublime text if you recommended Kite in the video? Which one is better? I'm currently using sublime, should I migrate to kite?
How can you release a video every day?🤔
Dude I am loving kite
hi kalle can you show us how to do an ecommerce with django?
9:00 I had the same issue. To solve it you need these imports from the selenium library. They let you execute an action once an element is present/clickable, depending on the EC function you call. Here's an example you should be able to paste at line 16-17:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

element_xpath = "YOUR XPATH"  # the xpath of the element you want to click

try:
    button = WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, element_xpath)))
    button.click()
except Exception as e:
    print(e)
Totally off topic xD but which watch is he wearing? Looks cool
Very good, thanks a lot 💙💙💙
you can just try to resolve the actual domain you want to check and see if it comes back as NXDOMAIN, instead of scraping godaddy... this method is orders of magnitude faster and can be multi-threaded.
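A minimal sketch of that idea using only the standard library (the candidate names are made up, and note that a registered domain with no A record would also fail to resolve, so this works best as a fast pre-filter):

import socket
from concurrent.futures import ThreadPoolExecutor

def resolves(domain):
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:  # NXDOMAIN and similar lookup failures land here
        return False

candidates = ["sparrow.com", "someveryunlikelybirdname.com"]
with ThreadPoolExecutor(max_workers=20) as pool:
    for domain, taken in zip(candidates, pool.map(resolves, candidates)):
        if not taken:
            print(domain, "did not resolve - worth checking with the registrar")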
This is amazing, love your content so far, keep it up 👍
Please explain how we can host our bot so that we don't have to switch on our pc to run it... hope u understand
What should i use Linux for python programming
screen recording blurry :(
9:30 lol i do the exact same thing, but it is quite destructive to do this
Use the statement if len(driver.find_elements_by_xpath("")) > 0:
If it's > 0 it means the element is available, else it has not loaded yet
I want to know where you write these python programs, which platform you use, and how you use the command prompt to run them. Can you make a video explaining that? I am just stuck with Jupyter. It would be really helpful to know about this
Great job Kalle !
I love those automation ideas and those videos
The text is a bit hard to read on my monitor, not sure if there is something going on with your screen capture resolution. Otherwise, awesome video!
cool input on web-scraping possibilities. plz go on with that stuff ;)
but: is it just me or is the text-rendering off?
Even at 4K the resolution of regions of the video containing text is extremely low :/
Your prints can be optimized. I recommend using a multi-line print instead of adding multiple ones, e.g.
print('normal 1')
print('normal 2')
becomes
print('''
multi 1
multi 2
''')
You shouldn't use selenium. You should use beautiful soup or lxml. A lot faster and easier to set up.
To be fair, Selenium supports xpath, which can make your selectors much more robust and future-proof, although in most cases Scrapy is my preferred option, especially as it's asynchronous, which makes it really fast.
@@ian8502 lxml uses xpath, so that's not really a problem. Both BeautifulSoup and Scrapy use lxml under the hood.
Please use Python Requests for your next python webscraping project. For simple projects it's much more efficient and easy to use. No selenium headaches etc. Keep up the vids ;)
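For pages that don't need JavaScript, a minimal requests + BeautifulSoup sketch (the URL and the tag being searched for are placeholders, not the GoDaddy markup from the video):

import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com")      # placeholder URL
soup = BeautifulSoup(response.text, "html.parser")
for heading in soup.find_all("h1"):                 # stand-in for a real selector
    print(heading.get_text(strip=True))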
is it me or is the text in the terminal and editor very unreadable.. watching the Video in HD and tried 4K
can i create the same startup with javascript?
Should I learn java or python? I want to do some http requests, web scraping, and make some simple programs I guess. Java seems so complicated when sending requests.
thanks for all the inspiration
That Kite link starts downloading their executable when you visit the site, that's shitty.
Where are you from, and how old?
This idea is really good
Can you put the link to the intro music?
Hey can you do a video on mitmproxy and how to use it? plz
I have a GitHub repo with an application that will do SSL pinning for you, if that's what you are looking for
@@khushchauhan8891 yeah that will be helpful. thx a lot for replying quickly.
why dont you use wsl
1. The requests library would've been better and faster here.
2. Maybe make it look through 300 domains, and every time it finds an available one, save it in a log file. Then maybe send a push message to the user when the search is done.
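A rough sketch of point 2 (generate_name and is_available are hypothetical stand-ins for the video's bird-name picker and whatever availability check you use; the push message is left as a print):

import random

def generate_name():
    # Hypothetical stand-in for the bird-name picker from the video.
    return random.choice(["sparrow", "heron", "kingfisher"])

def is_available(domain):
    # Hypothetical stand-in for the real availability check.
    return random.random() < 0.1

found = []
for _ in range(300):
    domain = generate_name() + ".com"
    if is_available(domain):
        found.append(domain)
        with open("available_domains.log", "a") as log:  # append each hit as we go
            log.write(domain + "\n")

print(f"Done - {len(found)} available domains logged")  # a push notification could go here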
The code in the video is a bit vague. I don't know if other people feel the same.
What happened to Vim?
Great video! I think I would extend it so you're only alerted to available domains that cost less than a certain amount (based on the value within the price section on the website).
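A hedged sketch of that extension, reusing the browser object from the video (the XPath for the price element and the "$xx.xx" price format are assumptions about the page, not taken from the video):

MAX_PRICE = 15.00
price_xpath = "//span[@class='price']"  # hypothetical selector - adjust to the real page

price_elements = browser.find_elements_by_xpath(price_xpath)
if price_elements:
    raw = price_elements[0].text                          # e.g. "$11.99"
    price = float(raw.replace("$", "").replace(",", ""))  # strip currency symbol and commas
    if price <= MAX_PRICE:
        print(f"Available for {price} - worth an alert")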
Kalle your video quality is very good, but please make the screen recording quality a bit better, it is hardly visible at 360p
Love the video but it's hard to read the code.
how to install tokens?
Is this on github?
that intro music never gets old
Having trouble installing the token module, any tips guys?
Does someone know where I can find the source?
love your video
I am having problems while setting up selenium.. can anyone help?
I love the content but the resolution of your actual IDE is quite low which makes it hard to read. Surely there's a way to increase that for further videos?
@@BryanJenks You promised me I would get $50 to my paypal if I made that comment on this video but you haven't paid anything yet. If I'm not seeing anything by tomorrow I'm removing it again.
Love the videos 👍
how about web scraping with python, then doing exploratory data analysis, then visualizing the data, and... making a machine learning model
@kalle hallden your videos are inspiring...love them...btw ...I love your intro music also...which music track?
Use implicit wait
where is the code?
your intro song is sick
Can’t read anything on the screen.
Can you make a video on web designing?
okay okay you are looking awesome
woo an upload