As usual, straight to the point. Excellent job. 3 questions:
- Why train pose on a custom dataset if it already works in general conditions?
- Can yolov8 pose be combined with the yolov8 tracker without doing two detection inferences?
- I am considering paying for the subscription... what would I get?
Thank you again. Top notch, honest delivery on your side.
1. Training yolov8-pose on a custom dataset means detecting keypoints of other kinds of objects, for example keypoint detection for animals or other object types. 2. For tracking, detection is important, so you need a detector. 3. Some videos' code is for members only; you would get access to that, and your comments would be answered on priority. Glad my videos are helpful!
@@CodeWithAarohi Regarding whether yolov8 pose could be combined with the yolov8 tracker, my point was precisely that. I guess both networks have detection embedded, so if we want to track poses using yolov8... we would be detecting twice (a waste): once for pose and once for tracking the bounding box.
Thanks, but I have a question: do you know how to get the skeleton parameters and save them to a .txt file? I read YOLO's docs but didn't see any mention of getting this parameter.
Same issue here
How can the keypoints be extracted?
I have a similar issue, because sometimes not all body parts are visible.
Could you please guide us on how to do it properly?
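For anyone stuck on this, here is a minimal sketch of how keypoints can be pulled out of an Ultralytics result and dumped to a .txt file (this assumes the ultralytics package and the pretrained yolov8n-pose.pt checkpoint; the file names are placeholders):

from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')          # pretrained pose checkpoint
results = model('image.jpg')             # run inference on one image

with open('keypoints.txt', 'w') as f:
    for result in results:
        # result.keypoints.xy is a (num_persons, num_keypoints, 2) tensor
        for person in result.keypoints.xy:
            # one line per person: "x1 y1 x2 y2 ..."
            coords = ' '.join(f'{x:.2f} {y:.2f}' for x, y in person.tolist())
            f.write(coords + '\n')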
Great as usual. Thank you!
Glad you enjoyed it!
Hello, I am doing pose estimation on my custom dataset using yolov8n-pose. I am having a problem with the data annotation format. Can you tell me a website for data annotation, and which format we should use for Ultralytics?
Ultralytics recommends using their JSON2YOLO tool to convert your existing dataset from other formats to YOLO format for pose dataset annotation.
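For reference, a sketch of what an Ultralytics pose label looks like after conversion: one .txt file per image, one line per instance, all values normalized to [0, 1], with each keypoint written as x, y and a visibility flag (the numbers below are made up for illustration):

# class x_center y_center width height px1 py1 v1 px2 py2 v2 ...
0 0.512 0.433 0.210 0.680 0.501 0.120 2 0.489 0.115 2 ...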
Fantastic! Thanks so much!
You're very welcome!
In keypoint detection, how do we know which keypoint coordinates are for the wrist joint?
In COCO, the keypoints are indexed from 0 to 16. The left wrist is assigned index 9, and the right wrist is assigned index 10.
@@CodeWithAarohi thanks
@@CodeWithAarohi Where can I find details on the remaining keypoints? Thank you so much.
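The full COCO-17 ordering, which yolov8-pose follows, is below (written as a Python list for convenience):

COCO_KEYPOINTS = [
    'nose',            # 0
    'left_eye',        # 1
    'right_eye',       # 2
    'left_ear',        # 3
    'right_ear',       # 4
    'left_shoulder',   # 5
    'right_shoulder',  # 6
    'left_elbow',      # 7
    'right_elbow',     # 8
    'left_wrist',      # 9
    'right_wrist',     # 10
    'left_hip',        # 11
    'right_hip',       # 12
    'left_knee',       # 13
    'right_knee',      # 14
    'left_ankle',      # 15
    'right_ankle',     # 16
]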
Amazing work. I am working with pose detection on a custom dataset, labeling keypoints in Label Studio. However, it does not have the option to export in "YOLO" format.
Could you please tell me how to use the JSON/CSV format in YOLOv8 for pose detection?
Thank you.
import json
import os

def convert_coco_to_yolo(coco_json_path, output_dir):
    # Load COCO JSON file
    with open(coco_json_path, 'r') as f:
        coco_data = json.load(f)

    # Create output directory if it doesn't exist
    os.makedirs(output_dir, exist_ok=True)

    # Iterate over each image in the dataset
    for image_data in coco_data['images']:
        image_id = image_data['id']
        image_name = image_data['file_name']
        image_width = image_data['width']
        image_height = image_data['height']

        # Find annotations for the current image
        keypoints_list = []
        for annotation in coco_data['annotations']:
            if annotation['image_id'] == image_id:
                keypoints_list.append(annotation['keypoints'])

        # Skip images without annotations
        if not keypoints_list:
            continue

        # Create YOLO annotation file
        annotation_file_name = os.path.splitext(image_name)[0] + '.txt'
        annotation_file_path = os.path.join(output_dir, annotation_file_name)
        with open(annotation_file_path, 'w') as f:
            for keypoints in keypoints_list:
                # Use only labeled keypoints (v > 0) for the bounding box;
                # unlabeled keypoints are stored as (0, 0, 0) and would skew min/max
                xs = [keypoints[i] for i in range(0, len(keypoints), 3) if keypoints[i + 2] > 0]
                ys = [keypoints[i + 1] for i in range(0, len(keypoints), 3) if keypoints[i + 2] > 0]
                if not xs:
                    continue
                x_min, x_max = min(xs), max(xs)
                y_min, y_max = min(ys), max(ys)

                # Normalize bounding box coordinates to range [0, 1]
                x_center = (x_min + x_max) / (2 * image_width)
                y_center = (y_min + y_max) / (2 * image_height)
                width = (x_max - x_min) / image_width
                height = (y_max - y_min) / image_height

                # Write the annotation to the YOLO file (class index 0 assumed)
                f.write(f'0 {round(x_center, 6)} {round(y_center, 6)} {round(width, 6)} {round(height, 6)} ')

                # Append normalized keypoints (x, y, visibility) to the annotation
                for i in range(0, len(keypoints), 3):
                    x = round(keypoints[i] / image_width, 6)
                    y = round(keypoints[i + 1] / image_height, 6)
                    v = keypoints[i + 2]
                    f.write(f'{x} {y} {v} ')
                f.write('\n')

    print('Conversion complete.')

# Example usage
coco_json_path = 'path of coco file'
output_dir = 'output dir path'
convert_coco_to_yolo(coco_json_path, output_dir)
@@CodeWithAarohi Thank you so much
Very informative video
How many FPS can a Jetson Nano get with this YOLOv8 detection? What would be the best library for body detection in terms of performance on a Jetson Nano 4 GB? Thanks ❤
It achieves up to 8 FPS. The performance limitation might be due to the Jetson Nano being limited to JetPack 4 and Python 3.6; newer devices should perform better with YOLOv8.
Great!! Simple and clear.
Glad it was helpful!
@@CodeWithAarohi Hi! I am trying to draw a rectangle with the nose as its center point. How do I access the x and y coordinates of the pose estimation keypoints? Thank you!
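A minimal sketch of how that could work with the ultralytics API and OpenCV (the nose is index 0 in the COCO ordering; the 40 px box size and the file names are arbitrary placeholders):

import cv2
from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
frame = cv2.imread('image.jpg')
results = model(frame)

for person in results[0].keypoints.xy:   # one (num_keypoints, 2) tensor per person
    nose_x, nose_y = person[0].tolist()  # index 0 = nose in the COCO ordering
    half = 20                            # half the side of a 40 px box
    cv2.rectangle(frame,
                  (int(nose_x - half), int(nose_y - half)),
                  (int(nose_x + half), int(nose_y + half)),
                  (0, 255, 0), 2)

cv2.imwrite('output.jpg', frame)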
Is it possible to put a condition on the coordinates of the keypoints? In which part of the code can I see the coordinates of each keypoint and apply conditions to them? My goal is to define conditions that, if they occur, indicate the person's posture is inappropriate.
thank you so much
I am working on a similar project; did you figure out how to do it?
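One way to sketch this, assuming the COCO indexing listed earlier: compare keypoint coordinates against each other and flag a posture when a rule fires. The rule below (nose at or below hip level suggests a lying posture) is illustrative only, not a tested heuristic:

from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
results = model('image.jpg')

for person in results[0].keypoints.xy:   # one (17, 2) tensor per detected person
    nose_y = person[0][1].item()                         # index 0 = nose
    hip_y = (person[11][1] + person[12][1]).item() / 2   # 11/12 = left/right hip
    # note: undetected keypoints come back as (0, 0), so guard against them in real code
    if nose_y >= hip_y:   # image y grows downward: nose at or below hip level
        print('Possible lying-down posture detected')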
Awesome! Excellent video, Mam.
Thanks a lot
Is there any way to reduce the delay when running real-time detection with cap.read() (webcam or video)?
Yes, you can decrease the resolution. The processing speed of the computer can also affect the delay, so check that. Sometimes the camera or webcam itself can be a bottleneck; using a camera with a faster frame rate or lower latency can reduce the delay.
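For example, with OpenCV you can lower the capture resolution and shrink the internal frame buffer. Treat this as a sketch: CAP_PROP_BUFFERSIZE is only honored by some capture backends.

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # lower resolution -> less work per frame
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)      # keep only the newest frame (backend-dependent)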
Congratulations!! I am waiting for the next video about the custom dataset with YOLO pose. It would be great if you explain the way to detect keypoints in a video stream. Please don't take a long time!
Thank you! Will do!
@@CodeWithAarohi A fall detection video with YOLOv8 pose would be great.
Great tutorial.
Can you please suggest a free keypoint annotation tool for creating a custom dataset to build a model later on? Roboflow, the partner of Ultralytics, doesn't support keypoint detection. Thank you.
Good work.. Carry on..
I will try my best
Thanks, but I have one note: can you zoom in to a bigger font?
Is new video for custom dataset out?
Not yet!
How can I identify human activities after getting the keypoint detections? Thanks for your tutorials.
For that, train an object detection model on the activities you want to detect.
@@CodeWithAarohi No, my idea is to combine yolov8-pose and MediaPipe to identify human activities. Do you have a tutorial for this approach?
Mam, have you uploaded any video on a custom dataset for YOLOv8 pose?
No, but I did it with YOLOv7.
Can someone please explain how to use a webcam to do YOLOv8 pose estimation?
Sure
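A minimal sketch with the ultralytics package (source=0 selects the default webcam and show=True opens a live preview window; this assumes the pretrained yolov8n-pose.pt checkpoint):

from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
# source=0 reads frames from the default webcam; show=True displays annotated frames live
model.predict(source=0, show=True)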
This video is very helpful for me, Mam. Mam, can you explain briefly what pose estimation is in computer vision?
Sure
Pose Estimation is the process of estimating the 3D pose (position and orientation) of an object or a person from a 2D image or a video stream. In the context of computer vision, it is often used to track the movement of objects or people in real-time, for applications such as sports analysis, surveillance, robotics, and virtual reality.
The process typically involves the following steps:
Feature detection: identifying distinctive points or landmarks on the object or person of interest, such as corners, edges, or joints.
Feature matching: finding correspondences between the detected features in different frames of the video or image sequence.
Pose estimation: using the correspondences to estimate the 3D pose of the object or person relative to the camera, usually by solving a system of equations that relates the 2D image coordinates to the 3D world coordinates.
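To make that last step concrete: OpenCV's solvePnP implements exactly this 2D-to-3D equation solving, given matched points and the camera intrinsics. A sketch follows; the point coordinates and intrinsics are placeholder values, not real calibration data.

import numpy as np
import cv2

# 3D model points (world coordinates) and their matched 2D image projections
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                          [0, 0, 1], [1, 0, 1]], dtype=np.float64)
image_points = np.array([[320, 240], [400, 238], [322, 170], [402, 168],
                         [318, 300], [398, 298]], dtype=np.float64)

# Pinhole camera intrinsics (fx, fy, cx, cy) -- placeholder values
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume no lens distortion

# Recover the rotation and translation of the object relative to the camera
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
print('rotation vector:', rvec.ravel(), 'translation:', tvec.ravel())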
@@CodeWithAarohi Thank you so much mam
Thanks! Can you also run YOLOv8 pose estimation on an OAK-D camera with VPU?
Can you teach us how to implement YOLOv8 keypoint detection on Android?
I will try to do a video after finishing the tasks in the pipeline.
Thank you very much@@CodeWithAarohi
Thanks is not enough for you. ❤
Glad you enjoyed my video 🙂