I'm Legend. My love for using class has heightened, all thanks to you Paul.
Legend!
Your videos really help me out. I have been following you since last year, and from each of your tutorials I have learned so much that has made a positive impact on my engineering career. I love the way you say "Hey, BOOM!!!" Amazing, sir!!!
Nice work!
Legend also - the increased frame rate means I had the hands and face modules working together - still at 10fps. Thanks Paul!
Thanks Paul! As usual, great presentation of a complex subject in a very straightforward manner. The comment by Mr. Peter Wadsworth is something that I would love to see on this channel. FPGAs are not more expensive than an Arduino, but the books could use an author named Paul M.
I am legend. Submitted the homework on GitHub. Man, I envy BBF now, he got a shout-out in the video 😥!
Great 👍 to see you are still making epic lesson videos!
I folded up like a cheap Walmart lawn chair :( I just don't know enough about self and other class stuff. These lessons are great and I am learning more and more with each lesson. Thanks Paul for these great videos.
You might go review one of the earlier lessons where I went over classes, methods, and functions.
I am Legend! I finally got a class working!
LEGEND!
Iced with a hint of sweet. Sipping one now.
These series are amazing! Thank you!
Glad you think so!
Paul, your videos are among the best on UA-cam, in terms of content and your ability to get the subject matter across in an understandable form.
Looking to the future, do you have any plans to make any videos/lessons on FPGAs?
Still struggling a little with the classes, mostly with passing and retrieving parameters. I had the code almost exactly the same as yours on the face detection class but I failed to return the data in the double bracketed tuple and I couldn't retrieve the data in my main program. I definitely am learning but this part is tough. Thanks for methodically teaching us and reteaching the material so that it sinks in. I feel like I need to go through this whole series again so that it will really sink in.
See if it works when you use my code. Again, it might be an issue if you are using different versions of the software.
@@paulmcwhorter it's not my software this time, it's my brain. 🤣 Just need to continue learning until it comes easier. It works with your code. I was super close with my code and I was getting the correct data back from my class but I couldn't figure out how to get the data back into my main program. I understood it after I saw your explanation. That's what you said, struggle with it and then when you see it done it will make sense. You're right. Thanks for continuing to teach.
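The pattern this exchange describes, a class method that hands two results back as one tuple, and a main program that unpacks them, can be sketched like this. The names (`FaceFinder`, `findFaces`, the fake bounding box) are hypothetical stand-ins, not the lesson's actual code; a real detector would compute `faces` from the frame.

```python
class FaceFinder:
    """Stand-in for a face-detection class; findFaces returns fake data."""
    def findFaces(self, frame):
        # Each "face" is a bounding box (x, y, w, h). Returning the frame
        # and the list together as one tuple is the part that trips people
        # up: the method gives back TWO things packed into one object.
        faces = [(10, 20, 100, 100)]
        return frame, faces

# "Main program": unpack the returned tuple into two separate names.
finder = FaceFinder()
frame, faces = finder.findFaces("dummy-frame")
for x, y, w, h in faces:
    print(x, y, w, h)
```

If you grab the result with a single name instead (`result = finder.findFaces(...)`), you get the whole tuple and have to index it as `result[0]` and `result[1]`, which is where the "double bracketed" confusion usually starts.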
I'm Legend!
I don't know why it can't be used in the latest version (mediapipe 0.8.9.1); it says "incompatible function arguments."
Flip the frame, then left=left and right=right. At least they do on my screens.
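In OpenCV the mirror this comment suggests is `cv2.flip(frame, 1)` (flip code 1 mirrors around the vertical axis, so your on-screen left matches your real left). A minimal sketch of the same horizontal mirror, using plain nested lists so it runs without OpenCV installed:

```python
# A tiny 2x3 "frame" of pixel values; cv2.flip(frame, 1) does the same
# horizontal mirror on a real image array.
frame = [[1, 2, 3],
         [4, 5, 6]]

# Reverse each row: the leftmost column becomes the rightmost.
mirrored = [row[::-1] for row in frame]
print(mirrored)
```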
I use the newer version of mediapipe, as they don't have face_detection in the older version anymore. They changed the input parameters for Pose and Hands in the newer version of mediapipe. You can look at the input parameters by opening the pose and hands files in:
your virtual environment folder\Lib\site-packages\mediapipe\python\solutions
For the Pose class, the input parameters are the following:
def __init__(self,
static_image_mode=False,
model_complexity=1,
smooth_landmarks=True,
enable_segmentation=False,
smooth_segmentation=True,
min_detection_confidence=0.5,
min_tracking_confidence=0.5):
For the Hands class, the input parameters are the following:
def __init__(self,
static_image_mode=False,
max_num_hands=2,
model_complexity=1,
min_detection_confidence=0.5,
min_tracking_confidence=0.5)
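Since the parameter lists changed between versions, the safest habit is to pass everything by keyword rather than position, which is also what usually fixes the "incompatible function arguments" error mentioned above. Here is a sketch of that habit using stub classes that only mirror the `__init__` signatures quoted in this comment; the real classes live at `mp.solutions.pose.Pose` and `mp.solutions.hands.Hands`, and these stubs do no detection at all.

```python
class Pose:
    """Stub mirroring the Pose.__init__ signature quoted above."""
    def __init__(self, static_image_mode=False, model_complexity=1,
                 smooth_landmarks=True, enable_segmentation=False,
                 smooth_segmentation=True, min_detection_confidence=0.5,
                 min_tracking_confidence=0.5):
        self.static_image_mode = static_image_mode
        self.model_complexity = model_complexity

class Hands:
    """Stub mirroring the Hands.__init__ signature quoted above."""
    def __init__(self, static_image_mode=False, max_num_hands=2,
                 model_complexity=1, min_detection_confidence=0.5,
                 min_tracking_confidence=0.5):
        self.max_num_hands = max_num_hands
        self.min_detection_confidence = min_detection_confidence

# Keyword arguments keep working even if a library release reorders or
# inserts parameters; positional calls are what break across versions.
pose = Pose(static_image_mode=False, model_complexity=1)
hands = Hands(max_num_hands=2, min_detection_confidence=0.5)
```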
Yes, it is hard to 'Future Proof' these videos when they begin to make changes to the libraries. Glad you got it figured out though.
@@paulmcwhorter Yes, but in a way it is also good for learning. I'd like to leave the way to do it with the new library here, if you don't mind. 😀 Thank you very much for these lessons. They let me learn a lot about Python and AI.
i am legend (before video)
Another great video. But I came unstuck and couldn't get it to work. I went through the whole video again and found
bBox=face.location_data_relative_bounding_box when it should have been
bBox=face.location_data.relative_bounding_box
Then everything worked. It did force me to review the meaning of this statement and where it came from.
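The bug here is worth staring at: `location_data.relative_bounding_box` is two chained attribute lookups, while the typo `location_data_relative_bounding_box` is one flat name that doesn't exist on the object. A toy sketch using `SimpleNamespace` stand-ins (the field names mimic a mediapipe detection result; this is not the real object) shows the difference:

```python
from types import SimpleNamespace

# Toy object shaped like a detection result: face has a location_data
# attribute, which in turn has a relative_bounding_box attribute.
face = SimpleNamespace(
    location_data=SimpleNamespace(
        relative_bounding_box=SimpleNamespace(
            xmin=0.1, ymin=0.2, width=0.3, height=0.4)))

# Correct: walk down the attribute chain one dot at a time.
bBox = face.location_data.relative_bounding_box
print(bBox.xmin, bBox.width)

# The typo asks for a single attribute with one long underscored name,
# which was never defined, so Python raises AttributeError.
caught = False
try:
    bBox = face.location_data_relative_bounding_box
except AttributeError:
    caught = True
print("typo raised AttributeError:", caught)
```

Because the right-hand side raises before the assignment happens, `bBox` still holds the good value afterward, which is another reason this kind of typo can be confusing to track down.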
hellllllllooooooo classy people
I posted a video of my first attempt at an OpenCV / Tkinter Graphical User Interface (GUI) ua-cam.com/video/PuQ8J5vGcWo/v-deo.html
Building on the foundation Paul has laid for us. Thanks Paul McWhorter!