Thanks for linking this video John!
No worries hope it helped you!
@@JohnWatsonRooney It sure does!
Love you man, I saw like 10 videos about API requests and data pulls, all of them with over 1M views, and they all just show how to code the request and nothing else, which amounts to nothing, what a waste of time. This video is simple, concrete, and right to the point. You surely deserve more viewers.
I totally get what you mean. It's like most videos say "we're going to learn the order of operations" and their example is 2 + 2 * 3, while this guy says "let's look at 2 + 2 * -(6 / 2 + 2 / -8 * 4 ** 2) / 1 * 2 - 1/1".
This should be the first video recommended when you search for making API requests. Went through the video quickly but will be watching slowly tomorrow while coding alongside the video :)
I have to say that you have given on your channel the most comprehensive insight into scraping of any person on this platform. For that I must say a very big thank you.
Thank you, very kind!
love your way of teaching, straightforward with no fluff, thank you!
Thanks!
Love the way you have started from scratch and have gradually paced up to the pagination part. Very short and crisp way of teaching, much appreciated! 🙌
Thank you, very kind!
Best YT channel. This knowledge used to be available only in paid content. So thank you so much :)
Thanks 😊
I am just starting to learn Python programming and I can say that you are my best and favorite Teacher! Thank you!
Wow, thanks!
This is the best tutorial on APIs that I have watched. It's excellent!
Glad you think so!
Best explanation I have ever heard. With your help I was able to figure it all out. thanks for what you do.
Tremendous experience. I've completed this tutorial and everything is crystal clear for each topic. Thank you for putting in so much effort.
Glad it was helpful!
Best tutorial on this topic i found so far! Thanks man!
Thanks!
Extremely helpful. Cannot thank you enough for this.
I was able to do it, this is a great video!
lots of respect and love to you man!🌻
This was wildly useful. Thank you for sharing your knowledge with the plebs.
I loved this thank you so much. Your explanations were perfect , detailed yet to the point!
Nice one John! Love working with an API, me. This is great :)
Thank you!
Awesome sir. Your way of teaching is so good.
The Zorro of Web Scraping..
Thanks for giving this to us 😊
I had some confusion about APIs but you just made it look very simple
This was a very helpful walk-through - Thanks for the great content! :)
Thanks for watching!
What a great presentation, thnx John
WOW! That was such a clear explanation.
It's the best video for understanding how to make API requests in Python.
Great video John, very helpful and thanks for sharing.
Maybe one suggestion to think over.
The little PiP of your face is in the bottom right-hand corner.
But most of the terminal output is also at the bottom, so (as I have noticed in many of your videos) the output is often hidden behind that small screen.
Maybe it would be better to put it (the PiP) in the top right-hand corner.
Anyway, thanks for the effort of making these videos.
In the past I had to search YT for solutions to my questions, but now that I have found your channel, all my answers are here in one channel.
Good suggestion, thank you. Going forward I will check that I'm not blocking the terminal output and will move my camera picture if needed!
I wish I had watched this video before, it sure would've helped me. The other two videos about APIs that you did were helpful too, but they were too short. I like long videos where you explain everything in detail.
Great, I’m glad this one was able to help you, thanks for watching
Fantastic, very well explained. You've got a new subscriber.
Really neat and well-structured video and code! Thank you, kind sir!
Glad it was helpful!
OK, thanks again for your video. Now I am getting a better idea of scraping through APIs.
Watched this video to help with learning English :) thank you for the clean English.
Thanks John! I learned a lot with your video!
Learning about APIs and found your video. Great project! Easy to follow along and replicate. Thank you!
Thank you! It was exactly what I was looking for.
Nice, well-structured video.
Awesome explanation! Thanks a ton.
20:07 I had a problem with pandas being unrecognized, but I fixed it by using pip3 install pandas
instead of
pip install pandas
Hope this helps!
On what OS?
Windows
thank you for the great tutorial on pagination
Great and informative video, thank you so much!
Simple and superb!!!
Thanks
Useful tutorial. Thanks.
Hello, thank you for the video.
I wanted to ask what extension/program you are using to make the APIs more readable in your browser. Thank you
Thank you, I will make this my reference :)
I always support with a full view and like your videos as thanks for your teaching. Looking forward to you sharing more on SQL and APIs :)
Thank you!
great video!!
I learned a lot. Thank you very much!
This video is very informative
awesome video :) thank you
Wow, nice content, exactly what I was looking for, and a nice explanation.
Thanks!
Thank you for your video.
Thank you for a great presentation
Thanks!
Excellent! Thank you so much.
awesome tutorial sir
Like your video! Thanks
Most wonderful tutorial ever :) thanks
Very important subject in web scraping. Need more tutorials on it if possible. Thanks for adding value.💖👌🌹
Amazing content. Is there any video on working with APIs in Postman?
Love your stuff John, keep going. At the moment I'm dealing with an API that needs a bearer token that changes daily. I'm stumped, I need to figure that out, otherwise I can't print prices any more 😪
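A minimal sketch of one way to handle a rotating bearer token, assuming the API exposes some kind of login/token endpoint; the URLs and field names below are placeholders, not from the video:

import requests

TOKEN_URL = "https://example.com/api/auth"    # placeholder
PRICE_URL = "https://example.com/api/prices"  # placeholder

def get_token():
    # Ask for a fresh token each run; the payload and response keys depend on the API
    r = requests.post(TOKEN_URL, data={"username": "user", "password": "pass"})
    r.raise_for_status()
    return r.json()["access_token"]

def get_prices():
    headers = {"Authorization": f"Bearer {get_token()}"}
    r = requests.get(PRICE_URL, headers=headers)
    r.raise_for_status()
    return r.json()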
I get an error in the last for loop, using 'mainlist':
TypeError: 'NoneType' object is not iterable.
I don't really understand why I'm getting that; I followed everything the same and it all worked up until the last part.
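One common cause of that error in this kind of script (a guess, without seeing the actual code): the function that builds the list never returns it, so it implicitly returns None, and the final loop then tries to iterate over None. A minimal illustration:

def parse_json(response):
    charlist = []
    for item in response['results']:
        charlist.append({'name': item['name']})
    return charlist   # if this return is missing, the function returns None

# mainlist = parse_json(data)
# for item in mainlist:  -> TypeError: 'NoneType' object is not iterable when mainlist is None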
Thanks a lot for this video John 😀
Thanks for watching!
Where does 'response' come from in the second function that was created?
great video, thank you
Thanks.This is great :)
Well done sir
Amazing!!!
Thanks!
What are you using that makes your API responses so much more readable? When I click on the link, I get a continuous line of text, not an organized block of text divided into different categories.
In the browser? Firefox does it by default, or the app I use is called Insomnia, to make API requests
@@JohnWatsonRooney I'll check those out, thanks John!
I have a question. When you are using a list for organizing data into pandas, how do you specify an area that is below episodes?
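If the question is about the nested episode list in each character (as in the video's data), one option is to derive the field you want before building the DataFrame; pandas.json_normalize is another. A small sketch with made-up sample data:

import pandas as pd

characters = [
    {'name': 'Rick', 'episode': ['ep1', 'ep2']},
    {'name': 'Morty', 'episode': ['ep1']},
]

# Derive a flat column from the nested list yourself
rows = [{'name': c['name'], 'episodes': len(c['episode'])} for c in characters]
df = pd.DataFrame(rows)

# Or let pandas flatten nested dict fields automatically
df2 = pd.json_normalize(characters)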
Great video, thanks!!!
Very nicely put together; I have been working with APIs using Python and this video helps.
One thing I have always been trying to find: is there a way to create a generic Python project to handle any REST API calls? I am aware that each API is customized by the API provider, but I'm still trying to come up with something generic. Any thoughts/suggestions along these lines?
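There's no fully universal client, since auth and pagination differ per API, but a thin reusable wrapper gets most of the way there. A minimal sketch (the class and method names here are my own, not from the video):

import requests

class ApiClient:
    def __init__(self, base_url, headers=None):
        self.base_url = base_url.rstrip('/')
        self.session = requests.Session()
        if headers:
            self.session.headers.update(headers)

    def get(self, endpoint, **params):
        # Generic GET: the endpoint and query parameters vary per API
        r = self.session.get(f"{self.base_url}/{endpoint.lstrip('/')}", params=params)
        r.raise_for_status()
        return r.json()

# Example against the API used in the video:
# client = ApiClient("https://rickandmortyapi.com/api")
# data = client.get("character", page=2)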
How can I scrape HTML where there is no JSON file and everything is behind a button that displays hidden elements? The issue is that there is no JSON unless I press the button on the main website. Is there a way to deal with that button in Python code other than using Selenium WebDriver?
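One non-Selenium approach that often works: open the browser dev tools (Network tab, filtered to XHR/Fetch), click the button once, and see which request it fires; that request can usually be replayed directly with requests. Everything below is hypothetical, since it depends on what the button actually calls:

import requests

# Hypothetical endpoint discovered in the Network tab when the button is clicked
url = "https://example.com/api/hidden-elements"
headers = {"User-Agent": "Mozilla/5.0"}

r = requests.get(url, headers=headers)
r.raise_for_status()
data = r.json()   # or r.text if the button returns an HTML fragment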
Thanks a lot
Which theme is he using? Please tell me.
I'm pretty sure it's Gruvbox Material
What browser are you using that shows the number of items in results? I tried Brave (based on Chromium) with a prettier-JSON extension, and even Firefox, but when I collapse results, I don't see the number of items. TIA!
I am a complete beginner here. Just to understand what you did in layman's terms: you basically mirrored the website into Python and extracted information from it. Is this correct? And somehow, you also got the data in JSON format for ease of interpretation by Python.
John, would you know where I can find information on calling two APIs at once and using that data with Python?
Also, thanks for this video
Hey, thanks. If you want to call 2 different APIs you can do it the same way from the same script, just change the URL
Thanks @@JohnWatsonRooney . I just want to be clear on this. If my transaction data api has customerID (int) and my customer data has customerID and I want to pull the transactions and include the customer name, not ID, I would use two URLs. Would I then have if tdata['customerID'] = cdata['customerID'] print cdata['customername']?
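Roughly, yes, with == for the comparison rather than =. A hedged sketch of that join, with placeholder URLs and the key names taken from the comment:

import requests

transactions = requests.get("https://example.com/api/transactions").json()  # placeholder
customers = requests.get("https://example.com/api/customers").json()        # placeholder

# Build a lookup once instead of comparing inside nested loops
name_by_id = {c['customerID']: c['customername'] for c in customers}

for t in transactions:
    print(t['customerID'], name_by_id.get(t['customerID'], 'unknown'))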
Good content, good style, I sub'd, thanks John
Thanks glad you like it 👍
@@JohnWatsonRooney Hey John, since you are online may I make a request? Can you do something with a REST API and specifically query string parameters? Like GET requests with 3 or 4 parameters? I have not seen a good tutorial online on the topic.
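In the meantime, a minimal sketch of a GET request with several query string parameters using the params argument of requests; the URL and parameter names are placeholders:

import requests

url = "https://example.com/api/search"   # placeholder
params = {
    "q": "laptop",
    "page": 1,
    "per_page": 50,
    "sort": "price",
}

# requests encodes these into ?q=laptop&page=1&per_page=50&sort=price
r = requests.get(url, params=params)
r.raise_for_status()
print(r.json())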
Thank you for this video. It is very helpful as it solves most of the challenges in my use case.
The only remaining challenge is that I need to save the raw JSON data.
How do you suggest I go about it?
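If "save the raw JSON" means writing it to a local file before any parsing, a minimal sketch using the standard library (the endpoint is the one from the video, but any response works the same way):

import json
import requests

r = requests.get("https://rickandmortyapi.com/api/character")
data = r.json()

# Keep an untouched copy of the raw response alongside any parsed output
with open("raw_response.json", "w", encoding="utf-8") as f:
    json.dump(data, f, indent=2)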
Hi John, great explanation! Is the process the same when I have a page_token instead of a page number? Thanks
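Broadly the same idea, except you loop until the API stops handing back a token instead of counting pages. A sketch, assuming the response carries something like a next_page_token field (all names are placeholders):

import requests

url = "https://example.com/api/items"   # placeholder
results = []
token = None

while True:
    params = {"page_token": token} if token else {}
    data = requests.get(url, params=params).json()
    results.extend(data["items"])
    token = data.get("next_page_token")
    if not token:          # no token returned means that was the last page
        break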
Is there any way I can support this channel? I really, really want to... what a life saver
Thanks a lot, just watching means a lot to me!
2)
def custom_operator(numbers):
    length = len(numbers)
    output = [2 * numbers[0]]  # Double the first number
    if length > 2:
        for i in range(1, length - 1):
            # Multiply each middle number by the sum of all the others
            result = sum(numbers[:i] + numbers[i+1:]) * numbers[i]
            output.append(result)
    output.append(2 * numbers[-1])  # Double the last number
    return output
Thanks 👍
What is your career advice John?
I fixed number 3 to this (to include math. and statistics.). But now it gives an error saying: "'float' object is not iterable"
import math
import statistics

def normalization(list):
    newList = []
    if len(list)
Great video and awesome teaching. But what if your API allows you to make asynchronous calls? How should we handle pagination in such cases?
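One common pattern, if the API tolerates it: fetch page 1 to learn the total page count, then request the remaining pages concurrently. A sketch using httpx (my choice of async client, not from the video; aiohttp works similarly) against the API used in the video:

import asyncio
import httpx

BASE = "https://rickandmortyapi.com/api/character/"

async def fetch_all():
    async with httpx.AsyncClient() as client:
        first = (await client.get(BASE, params={"page": 1})).json()
        pages = first["info"]["pages"]
        tasks = [client.get(BASE, params={"page": p}) for p in range(2, pages + 1)]
        responses = await asyncio.gather(*tasks)
        results = first["results"]
        for r in responses:
            results.extend(r.json()["results"])
        return results

characters = asyncio.run(fetch_all())
print(len(characters))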
Hi sir, could you do one with an external file like a CSV or Excel file, reading from the Excel file and writing back to that Excel or CSV file?
Use pandas. You're welcome!
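A minimal sketch of the pandas round trip the reply is pointing at (read_excel needs the openpyxl package installed for .xlsx files; the file names are placeholders):

import pandas as pd

df = pd.read_excel("input.xlsx")          # or pd.read_csv("input.csv")

# ... modify the DataFrame, e.g. add a column filled from an API ...
df["checked"] = True

df.to_excel("output.xlsx", index=False)   # or df.to_csv("output.csv", index=False)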
Awesome!! What if for each API call you get only 1000 records, with 100 records per page, and you need to get 20k records from your API in total? How do you handle such cases?
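Generally it is the same pagination loop, just bounded by how many records you still need. A sketch assuming an offset/limit-style API (URL and field names are placeholders):

import requests

url = "https://example.com/api/records"   # placeholder
target = 20_000
records = []
page = 1

while len(records) < target:
    data = requests.get(url, params={"page": page, "per_page": 100}).json()
    batch = data["results"]
    if not batch:          # stop early if the API runs out of data
        break
    records.extend(batch)
    page += 1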
Hi John, I need to scrape data from Google results, with the number of results and a multithreading concept. If you make a video on this, it will be helpful for me
Suggest a way to return 2 million records from a paginated API when the limit is set to 10k records. What should the approach be using Python?
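When a hard cap like 10k can't simply be paginated past, the usual workaround is to split the query into smaller slices (by date range, ID range, or some other filter the API supports) so that each slice stays under the cap, then page within each slice. A hedged sketch with placeholder parameters:

import requests
from datetime import date, timedelta

url = "https://example.com/api/records"   # placeholder
all_records = []
start = date(2023, 1, 1)

# One-day windows; each window must itself return fewer than the 10k cap
for offset in range(365):
    day = start + timedelta(days=offset)
    params = {"from": day.isoformat(), "to": day.isoformat(), "limit": 10_000}
    all_records.extend(requests.get(url, params=params).json()["results"])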
thank you thank you thank you
def parse_json(response):
    return [{'Name': item['name'], 'No_Episodes': len(item['episode'])} for item in response['results']]
We could have used a dict/list comprehension.
Hi John: I love your channel!!!! Fantastic! Keep it up!
I have one question:
At minute 11, when you are building def get_pages(response):, instead of pages = data['info']['pages'] you put pages = response['info']['pages'].
You replaced data with response.
My question: response is equal to what?
Do you mean response = r.json()?
Why?
Thank you for your help!
The "response" is the json data that i am taking from the first function and passing into the second one. This function just parses the data, so we need to give it that data
@@JohnWatsonRooney Thanks John for your quick reply. I understand your explanation. And now I see that here response = main_request(baseurl, endpoint).
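For anyone else tracing it, the flow is roughly this (reconstructed from this thread, so treat the details as approximate):

import requests

baseurl = "https://rickandmortyapi.com/api/"
endpoint = "character"

def main_request(baseurl, endpoint, x=1):
    r = requests.get(baseurl + endpoint + f"/?page={x}")
    return r.json()                 # this JSON dict is what gets passed around

def get_pages(response):
    return response["info"]["pages"]

response = main_request(baseurl, endpoint)   # so response == r.json() from above
print(get_pages(response))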
Thanks. Is it possible to share the Python file?
3)
def create_dictionary_from_csv(filename, separator):
    with open(filename, 'r') as file:
        lines = file.readlines()
    keys = lines[0].strip().split(separator)
    dictionary = {}
    for key in keys:
        dictionary[key] = []
    for line in lines[1:]:
        values = line.strip().split(separator)
        for i in range(len(keys)):
            dictionary[keys[i]].append(values[i])
    return dictionary
Any links to the code?
Number 3 fixed!!:
def normalization(list):
    newList = []
    if len(list)
For example, the API endpoint preview shows {"id":1, "disciplineId":2}, {"id":2, "disciplineId":1}, {"id":3, "disciplineId":3}, {"id":4, "disciplineId":1}, {"id":5, "disciplineId":2}
I want to find:
1. The id numbers for "disciplineId": 2 only
2. The id number of the first "disciplineId": 2 only
3. The id number of the second "disciplineId": 2 only
How can I write this in Python? If you have a video or code related to this, would you share the link please?
Postman Preview
[
  {
    "id": "129",
    "audioRecordingUrl": null,
    "disciplineId": 2,
    "howObtainedId": 8,
    "teamId": 20
  },
  {
    "id": "128",
    "audioRecordingUrl": null,
    "disciplineId": 3,
    "howObtainedId": 8,
    "teamId": 19
  }
]
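A minimal sketch of the three lookups, assuming the endpoint returns a list of dicts like the Postman preview (placeholder URL, and using the disciplineId spelling from the actual response):

import requests

items = requests.get("https://example.com/api/items").json()   # placeholder

# 1. All ids where disciplineId == 2
ids = [item["id"] for item in items if item["disciplineId"] == 2]

# 2. The first such id (None if there are none)
first_id = ids[0] if ids else None

# 3. The second such id
second_id = ids[1] if len(ids) > 1 else None

print(ids, first_id, second_id)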
3
def normalization(list):
    newList = []
    if len(list)
Brother, please try to maintain playlists according to the Library name.
1)
def square_cross(num):
    if num < 4:
        return "The minimum size is 4"
    if (num % 2) == 1:
        return "Please provide an even number"
    solution = ""
    for i in range(num):
        if (i == 0):
            solution += "*" * num + "\n"
        elif (i == (num - 1)):
            solution += "*" * num
        else:
            solution += "*" + (num - 2) * " " + "*" + "\n"
    return solution
Fixed the first part of number 4 -> in __str__(self)
class Car:
    def __init__(self, plate_number, fuel_available, fuel_full_cap, fuel_type):
        self.plate_number = plate_number
        self.fuel_available = fuel_available
        self.fuel_full_cap = fuel_full_cap
        self.fuel_type = fuel_type

    def refill(self, liters):
        if self.fuel_available + liters > self.fuel_full_cap:
            self.fuel_available = self.fuel_full_cap
        else:
            self.fuel_available += liters

    def __str__(self):
        return "{}\nFuel available: {}\nFuel type: {}".format(self.plate_number, self.fuel_available, self.fuel_type)
4
class Car:
    def __init__(self, plate_number, fuel_available, fuel_full_cap, fuel_type):
        self.plate_number = plate_number
        self.fuel_available = fuel_available
        self.fuel_full_cap = fuel_full_cap
        self.fuel_type = fuel_type

    def refill(self, liters):
        if self.fuel_available + liters > self.fuel_full_cap:
            self.fuel_available = self.fuel_full_cap
        else:
            self.fuel_available += liters

    def __str__(self):
        return "{}\nFuel Available: {}\nFuelType: {}".format(self.plate_number, self.fuel_available, self.fuel_type)