YOLOv8 - Keypoint Detection | YOLOv8-Pose | YOLOv8 pose estimation

  • Published Oct 4, 2024
  • Ultralytics released the latest addition to YOLOv8 - Keypoint Detection! 🔥
    Pose estimation refers to computer vision techniques that detect human figures in images and videos, so that one can determine, for example, where someone's elbow appears in an image. It works by detecting keypoints: keypoint detection involves simultaneously detecting people and localizing their keypoints (also known as interest points).
    Keypoint detection can be used for:
    ✅ Posture detection
    ✅ Object pose estimation
    ✅ Face recognition and matching
    ✅ Facial emotion recognition
    #KeyPointDetection #YOLOv8 #ObjectDetection

COMMENTS • 57

  • @tech_watt • 11 months ago +4

    How can the keypoints be extracted?

    • @Hnmworld-lg2yz • 7 months ago +1

      I have a similar issue, because sometimes not all the body parts are visible.
      Could you please guide us on how to do it properly?
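
      A minimal sketch of one way to pull the keypoints out, assuming the Ultralytics Python API (keypoints.xy / keypoints.conf; 'image.jpg' is a placeholder). Occluded body parts typically come back with low confidence, so they can be filtered out:

      from ultralytics import YOLO

      model = YOLO('yolov8n-pose.pt')
      results = model('image.jpg')
      kpts = results[0].keypoints
      xy = kpts.xy.cpu().numpy()      # (num_persons, 17, 2) pixel coordinates
      conf = kpts.conf.cpu().numpy()  # (num_persons, 17) per-keypoint confidence
      for person_xy, person_conf in zip(xy, conf):
          for idx, ((x, y), c) in enumerate(zip(person_xy, person_conf)):
              if c < 0.5:   # body parts that are not visible score low
                  continue  # skip them instead of trusting their coordinates
              print(f'keypoint {idx}: ({x:.1f}, {y:.1f})')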

  • @bb-andersenaccount9216 • 1 year ago +1

    As usual, direct to the point. Excellent job. 3 questions:
    -Why train pose on a custom dataset if it already works in general conditions?
    -Can yolov8 pose be combined with the yolov8 tracker without doing two detection inferences?
    -I am considering paying for the subscription... what would I get?
    Thank you again. Top-notch, honest delivery on your side.

    • @CodeWithAarohi • 1 year ago +1

      1- Training yolov8-pose on a custom dataset means detecting keypoints of other objects, such as keypoints of animals or other object types. 2- For tracking, detection is important, so you need a detector. 3- Some videos' code is for members only; you will get that, and you will get replies to your comments on priority. Glad my videos are helpful!

    • @bb-andersenaccount9216 • 1 year ago

      @@CodeWithAarohi Regarding whether yolov8 pose can be combined with the yolov8 tracker, my point was precisely that. I guess both NNs have detection embedded, so if we want to track poses using yolov8... we would be detecting twice (a waste): once for the pose and again for tracking the bounding box.
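
      For what it's worth, the Ultralytics API can track directly from a pose model, so the pose model's own detections feed the tracker and nothing is detected twice. A minimal sketch ('video.mp4' is a placeholder):

      from ultralytics import YOLO

      model = YOLO('yolov8n-pose.pt')
      # One forward pass per frame: the tracker reuses the pose model's boxes
      for result in model.track(source='video.mp4', persist=True, stream=True):
          ids = result.boxes.id        # per-person track IDs (None until tracks form)
          kpts = result.keypoints.xy   # keypoints for the same detections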

  • @minhthanhnguyen8776 • 6 months ago

    Thanks, but I have a question: do you know how to get the skeleton parameters and save them to a .txt file? I read yolo's docs but didn't see them mention how to get this parameter.
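
    A minimal sketch of one way to dump the skeleton to a .txt file, assuming the Ultralytics Python API ('image.jpg' and 'skeleton.txt' are placeholders):

    from ultralytics import YOLO

    model = YOLO('yolov8n-pose.pt')
    results = model('image.jpg')
    # One line per detected person: x y pairs for all 17 keypoints
    with open('skeleton.txt', 'w') as f:
        for person in results[0].keypoints.xy.cpu().numpy():
            f.write(' '.join(f'{x:.2f} {y:.2f}' for x, y in person) + '\n')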

  • @hongbo-wei • 2 months ago

    Fantastic! Thanks so much!

  • @pifordtechnologiespvtltd5698 • 7 months ago

    Very informative video

  • @abdelrahimkoura1461 • 1 year ago

    Thanks, but I have one note: can you zoom in to a bigger font?

  • @NaderLtaief-l6q • 1 year ago

    Great tutorial.
    Can you please suggest a free keypoint annotation tool for creating a custom dataset to build a model later on? Roboflow, the partner of Ultralytics, doesn't support keypoint detection. Thank you.

  • @cyberhard • 1 year ago

    Great as usual. Thank you!

  • @luis-alberto-nieto • 1 year ago

    Congratulations!! I am waiting for the next video about the custom dataset with yolo pose. It would be great if you explained the way to detect keypoints in a video stream. Don't take too long, please.

  • @utkarshtripathi9118 • 1 year ago

    Awesome! Excellent video, mam.

  • @Mamunur-illini • 11 months ago

    Amazing work. I am working on pose detection with a custom dataset, labeling keypoints in Label Studio. However, it does not have an option to export in "YOLO" format.
    Could you please tell me how to use JSON/CSV format in YOLOv8 for pose detection?
    Thank you.

    • @CodeWithAarohi • 10 months ago +1

      import json
      import os

      def convert_coco_to_yolo(coco_json_path, output_dir):
          # Load COCO JSON file
          with open(coco_json_path, 'r') as f:
              coco_data = json.load(f)
          # Create output directory if it doesn't exist
          if not os.path.exists(output_dir):
              os.makedirs(output_dir)
          # Iterate over each image in the dataset
          for image_data in coco_data['images']:
              image_id = image_data['id']
              image_name = image_data['file_name']
              image_width = image_data['width']
              image_height = image_data['height']
              keypoints_list = []
              # Find annotations for the current image
              for annotation in coco_data['annotations']:
                  if annotation['image_id'] == image_id:
                      keypoints = annotation['keypoints']
                      keypoints_list.append(keypoints)
              # Skip images without annotations
              if not keypoints_list:
                  continue
              # Create YOLO annotation file
              annotation_file_name = os.path.splitext(image_name)[0] + '.txt'
              annotation_file_path = os.path.join(output_dir, annotation_file_name)
              with open(annotation_file_path, 'w') as f:
                  for keypoints in keypoints_list:
                      # Derive bounding box coordinates from the keypoints
                      x_min = min(keypoints[0::3])
                      y_min = min(keypoints[1::3])
                      x_max = max(keypoints[0::3])
                      y_max = max(keypoints[1::3])
                      # Normalize bounding box coordinates to range [0, 1]
                      x_center = (x_min + x_max) / (2 * image_width)
                      y_center = (y_min + y_max) / (2 * image_height)
                      width = (x_max - x_min) / image_width
                      height = (y_max - y_min) / image_height
                      # Write class id and bounding box to the YOLO file
                      f.write(f'0 {round(x_center, 6)} {round(y_center, 6)} {round(width, 6)} {round(height, 6)} ')
                      # Append normalized keypoints (x, y, visibility) to the annotation
                      for i in range(0, len(keypoints), 3):
                          x = round(keypoints[i] / image_width, 6)
                          y = round(keypoints[i + 1] / image_height, 6)
                          v = round(keypoints[i + 2], 6)
                          f.write(f'{x} {y} {v} ')
                      f.write('\n')
          print('Conversion complete.')

      # Example usage
      coco_json_path = 'path of coco file'
      output_dir = 'output dir path'
      convert_coco_to_yolo(coco_json_path, output_dir)

    • @Mamunur-illini • 10 months ago

      @@CodeWithAarohi Thank you so much

  • @ucduyvo4552 • 1 year ago

    How can I identify human activities after getting the keypoint detections? Thanks for your tutorials.

    • @CodeWithAarohi • 1 year ago

      For that, train an object detection model for the activities you want to detect.

    • @ucduyvo4552 • 1 year ago

      @@CodeWithAarohi No, I have an idea to combine yolov8-pose and mediapipe to identify human activities. Do you have a tutorial covering this approach?
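
      One common recipe (a sketch, not covered in the video): use the pose keypoints as a feature vector and train a small classifier on labeled frames. X and y below stand for your own training data and are assumed, not provided here:

      from ultralytics import YOLO

      model = YOLO('yolov8n-pose.pt')

      def pose_vector(image):
          # Flatten the normalized keypoints of the first detected person
          kpts = model(image, verbose=False)[0].keypoints.xyn.cpu().numpy()
          return kpts[0].flatten() if len(kpts) else None

      # from sklearn.neighbors import KNeighborsClassifier
      # clf = KNeighborsClassifier().fit(X, y)   # X: pose vectors, y: activity labels
      # print(clf.predict([pose_vector('frame.jpg')]))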

  • @MuhammadRashid-hu3wo • 9 months ago

    Hello, I am doing pose estimation on my custom dataset using Yolov8n-pose. I am having a problem with the data annotation format. Can you tell me a website for data annotation, and which format we use for Ultralytics?

    • @CodeWithAarohi • 9 months ago

      Ultralytics recommends using their JSON2YOLO tool to convert your existing dataset from other formats to YOLO format for pose dataset annotation.

  • @xQwertTv • 9 months ago

    How many FPS can a Jetson Nano get with this YOLOv8 detection? What would be the best-performing library for body detection on a Jetson Nano 4 GB? Thanks ❤

    • @CodeWithAarohi • 9 months ago

      It achieves up to 8 FPS. The performance limitation might be due to the Jetson Nano being limited to JetPack 4 and Python 3.6, suggesting newer devices might perform better with YOLOv8.
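
      If more FPS is needed on a Jetson-class device, one common option (a sketch, assuming a TensorRT-capable JetPack and the Ultralytics export API) is exporting the model to a TensorRT engine:

      from ultralytics import YOLO

      model = YOLO('yolov8n-pose.pt')
      model.export(format='engine', half=True)  # build a TensorRT engine in FP16
      trt_model = YOLO('yolov8n-pose.engine')   # reload the engine for inference
      results = trt_model('image.jpg')          # 'image.jpg' is a placeholder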

  • @shantilalzanwar8687 • 7 months ago

    In keypoint detection, how do we know which keypoint coordinates belong to the wrist joint?

    • @CodeWithAarohi • 7 months ago +1

      In COCO, the keypoints are indexed from 0 to 16. The left wrist is assigned index 9, and the right wrist is assigned index 10.

    • @shantilalzanwar8687 • 7 months ago

      @@CodeWithAarohi thanks

    • @shantilalzanwar8687 • 7 months ago

      @@CodeWithAarohi Where can I find the remaining keypoint details? Thank you so much.
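
      For reference, the standard COCO keypoint order (indices 0-16) that the YOLOv8 pose models follow:

      COCO_KEYPOINTS = [
          'nose',            # 0
          'left_eye',        # 1
          'right_eye',       # 2
          'left_ear',        # 3
          'right_ear',       # 4
          'left_shoulder',   # 5
          'right_shoulder',  # 6
          'left_elbow',      # 7
          'right_elbow',     # 8
          'left_wrist',      # 9
          'right_wrist',     # 10
          'left_hip',        # 11
          'right_hip',       # 12
          'left_knee',       # 13
          'right_knee',      # 14
          'left_ankle',      # 15
          'right_ankle',     # 16
      ]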

  • @matthewcorones • 1 year ago

    Great!! Simple and clear.

    • @CodeWithAarohi • 1 year ago +1

      Glad it was helpful!

    • @matthewcorones • 1 year ago

      @@CodeWithAarohi Hi! I am trying to add a rectangle with the nose as its center point. How do I access the x and y coordinates of the pose estimation vertex points? Thank you!
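
      A minimal sketch of one way to do that with the Ultralytics API (the nose is index 0 in the COCO order; the 40x40 box size and file names are arbitrary):

      import cv2
      from ultralytics import YOLO

      model = YOLO('yolov8n-pose.pt')
      frame = cv2.imread('image.jpg')
      results = model(frame)
      for person in results[0].keypoints.xy.cpu().numpy():
          x, y = map(int, person[0])  # keypoint 0 is the nose
          cv2.rectangle(frame, (x - 20, y - 20), (x + 20, y + 20), (0, 255, 0), 2)
      cv2.imwrite('out.jpg', frame)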

  • @sridharan5143 • 1 year ago

    This video is very helpful for me, mam. Mam, can you explain briefly about pose estimation using computer vision?

    • @CodeWithAarohi • 1 year ago

      Sure

    • @CodeWithAarohi • 1 year ago +1

      Pose Estimation is the process of estimating the 3D pose (position and orientation) of an object or a person from a 2D image or a video stream. In the context of computer vision, it is often used to track the movement of objects or people in real-time, for applications such as sports analysis, surveillance, robotics, and virtual reality.
      The process typically involves the following steps:
      Feature detection: identifying distinctive points or landmarks on the object or person of interest, such as corners, edges, or joints.
      Feature matching: finding correspondences between the detected features in different frames of the video or image sequence.
      Pose estimation: using the correspondences to estimate the 3D pose of the object or person relative to the camera, usually by solving a system of equations that relates the 2D image coordinates to the 3D world coordinates.
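
      The last step is classically solved with a Perspective-n-Point solver. A minimal sketch using OpenCV's cv2.solvePnP (the landmark coordinates and camera intrinsics below are made-up illustration values):

      import cv2
      import numpy as np

      # 3D positions of four landmarks on the object, in the object's own frame (metres)
      object_points = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)
      # Their matched 2D pixel locations from feature detection/matching
      image_points = np.array([[320, 240], [420, 242], [418, 342], [318, 340]], dtype=np.float32)
      # Pinhole intrinsics: fx = fy = 800, principal point at the image centre
      camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
      dist_coeffs = np.zeros(4)  # assume no lens distortion
      ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
      print('rotation vector:', rvec.ravel(), 'translation:', tvec.ravel())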

    • @sridharan5143 • 1 year ago

      @@CodeWithAarohi Thank you so much mam

  • @soheil8304 • 1 year ago

    Is it possible to put a condition on the coordinates of the keypoints? In which part of the code can I see the coordinates of each keypoint and apply conditions to them? My goal is to define conditions such that, if they occur, the person's posture is flagged as inappropriate.
    Thank you so much

    • @mimimemes8938 • 6 months ago

      I am working on a similar project; did you figure out how to do it?
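
      A minimal sketch of putting a condition on the keypoint coordinates, assuming the Ultralytics Python API. The 'bad posture' rule here (nose below the shoulder line) is only an illustration; real rules are application-specific:

      from ultralytics import YOLO

      model = YOLO('yolov8n-pose.pt')
      results = model('image.jpg')  # 'image.jpg' is a placeholder
      kpts = results[0].keypoints
      xy = kpts.xy.cpu().numpy()
      conf = kpts.conf.cpu().numpy()
      for person_xy, person_conf in zip(xy, conf):
          nose, l_sh, r_sh = person_xy[0], person_xy[5], person_xy[6]  # COCO indices
          if person_conf[[0, 5, 6]].min() > 0.5:  # only judge clearly visible joints
              if nose[1] > (l_sh[1] + r_sh[1]) / 2:  # image y grows downwards
                  print('Posture flagged as inappropriate (head below shoulder line)')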

  • @nhattuyen1123 • 1 year ago

    Is there any way to reduce the delay when running real-time detection with cap.read() (webcam or video)?

    • @CodeWithAarohi • 1 year ago

      Yes, you can decrease the resolution. The processing speed of the computer also affects the delay, so check that. Sometimes the camera or webcam itself is the bottleneck; using a camera with a faster frame rate or lower latency can reduce the delay.
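
      A sketch of the usual tweaks (note that CAP_PROP_BUFFERSIZE is honored only by some capture backends, so treat it as best-effort):

      import cv2
      from ultralytics import YOLO

      model = YOLO('yolov8n-pose.pt')
      cap = cv2.VideoCapture(0)
      cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)     # keep only the newest frame
      cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)  # lower the capture resolution
      cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          results = model(frame, imgsz=320, verbose=False)  # smaller inference size
          cv2.imshow('pose', results[0].plot())
          if cv2.waitKey(1) & 0xFF == ord('q'):
              break
      cap.release()
      cv2.destroyAllWindows()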

  • @samverhezen8778 • 7 months ago

    Thanks! Can you also run YOLOv8 pose estimation on an OAK-D camera with VPU?

  • @Nantha-f2v • 9 months ago

    Mam, have you uploaded any video on a custom dataset for yolov8 pose?

  • @sanjoetv5748 • 7 months ago

    Can you teach us how to implement yolov8 keypoint detection on Android?

    • @CodeWithAarohi • 7 months ago

      I will try to do a video after finishing the tasks already in the pipeline.

    • @sanjoetv5748 • 7 months ago

      Thank you very much @@CodeWithAarohi

  • @Red-dg9ed • 1 year ago

    Thanks is not enough for you. ❤

  • @anilvashisth16 • 1 year ago

    Good work.. Carry on..

  • @aakash_shinde • 1 year ago

    Is the new video for the custom dataset out?

  • @R1SK648 • 1 year ago

    Can someone please help with how to use a webcam to do yolov8 pose estimation?
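
    For anyone else looking, a minimal sketch with the Ultralytics API; source=0 selects the default webcam:

    from ultralytics import YOLO

    model = YOLO('yolov8n-pose.pt')
    model.predict(source=0, show=True)  # stream from the webcam with a live overlay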