Sign language detection with Python and Scikit Learn | Landmark detection | Computer vision tutorial

  • Published 28 Sep 2024

COMMENTS • 394

  • @ComputerVisionEngineer
    @ComputerVisionEngineer  1 year ago +6

    Did you enjoy this video? Try my premium courses! 😃🙌😊
    ● Hands-On Computer Vision in the Cloud: Building an AWS-based Real Time Number Plate Recognition System bit.ly/3RXrE1Y
    ● End-To-End Computer Vision: Build and Deploy a Video Summarization API bit.ly/3tyQX0M
    ● Computer Vision on Edge: Real Time Number Plate Recognition on an Edge Device bit.ly/4dYodA7
    ● Machine Learning Entrepreneur: How to start your entrepreneurial journey as a freelancer and content creator bit.ly/4bFLeaC
    Learn to create AI-based prototypes in the Computer Vision School! www.computervision.school 😃🚀🎓

  • @jesussachez5468
    @jesussachez5468 1 year ago +18

    Hello from Mexico!
    I love your work. I did each step in the same way as you and had no difficulties. I really feel very grateful for the time you spent teaching us.
    Congratulations teacher!
    👨‍🏫

  • @joque4
    @joque4 6 months ago +10

    For all who are getting errors like "inhomogeneous shapes" while training on big datasets: take into account that the MP Hands processing does not always return 42 features (sometimes it just doesn't predict the coordinates well enough).
    To avoid this situation, always check the length of every array. You must have the same number of images and labels, and the feature arrays (landmark coordinates) should all have the same shape.
    Just remove the samples that don't return all the landmarks or don't work well with the Mediapipe hands solution, to ensure all the data has the same shape and to avoid these numpy errors (and bad models).
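A minimal sketch of that check (`EXPECTED_FEATURES` and `filter_samples` are hypothetical names; the tutorial itself builds each sample as a `data_aux` list):

```python
# Hypothetical helper: keep only samples whose feature vector has the
# expected length (21 landmarks x 2 coordinates = 42), so that a later
# np.asarray(data) yields a clean, homogeneous 2-D array.
EXPECTED_FEATURES = 42

def filter_samples(raw_samples, raw_labels):
    data, labels = [], []
    for data_aux, label in zip(raw_samples, raw_labels):
        if len(data_aux) == EXPECTED_FEATURES:
            data.append(data_aux)
            labels.append(label)
    return data, labels
```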

    • @RAHUL-dt5xm
      @RAHUL-dt5xm 5 months ago +1

      Can you help me? I trained only one gesture, nothing else, but the system detects untrained gestures as the trained gesture. Why? Any idea?

    • @aryanrana-o6n
      @aryanrana-o6n 5 months ago +1

      can you please share the changed code

    • @mohamedlhachimi2933
      @mohamedlhachimi2933 4 months ago +2

      I think, guys, that to solve this problem we have to tell the data-collection script to save only the frames where it can detect our hands; otherwise we store bad samples that end up causing errors like "inhomogeneous shapes". I actually worked around it by not moving my hand while collecting data. Alternatively, you can try this script to check the images you have stored.
      This script will only print the paths of the images that are deleted due to no hands being detected. It won't display any image windows.

      import os

      import cv2
      import mediapipe as mp

      mp_hands = mp.solutions.hands
      hands = mp_hands.Hands(static_image_mode=True)

      def process_and_check(image_path):
          # Read the image and convert BGR -> RGB for Mediapipe
          image = cv2.imread(image_path)
          image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

          # Detect hands and landmarks
          results = hands.process(image_rgb)

          # Delete the image if no hands were detected
          if not results.multi_hand_landmarks:
              print(f"Deleted image: {image_path}")
              os.remove(image_path)

      # Path to your data folder containing subfolders
      data_folder = "data"

      # Iterate through subfolders and the images inside them
      for folder_name in os.listdir(data_folder):
          folder_path = os.path.join(data_folder, folder_name)
          if os.path.isdir(folder_path):
              print(f"Checking images in folder: {folder_name}")
              for filename in os.listdir(folder_path):
                  if filename.endswith((".jpg", ".png")):
                      process_and_check(os.path.join(folder_path, filename))

    • @pawnidixit1084
      @pawnidixit1084 2 months ago

      I understood the problem but can't really put it in the program. could you explain it please?

    • @clementdethoor5533
      @clementdethoor5533 15 days ago

      Just add in create_dataset:

      if len(data_aux) == 42:
          data.append(data_aux)
          labels.append(dir_)

  • @vignesh.v4247
    @vignesh.v4247 1 month ago +1

    The best tutorial ever!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  • @aryanrana-o6n
    @aryanrana-o6n 5 months ago +1

    Really, thank you sir. Great project; you helped me a lot to learn many things. After solving multiple errors I finally succeeded in making the full project.

  • @sudarsonbharathwaaj1412
    @sudarsonbharathwaaj1412 8 months ago

    Thanks a lot bro, I watched many videos and i wasted a lot of time and finally found your video and done my project.

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  8 months ago +1

      You are welcome! Glad it was helpful! 😃

    • @RohanVector
      @RohanVector 8 months ago +1

      Please send your github link please

    • @RohanVector
      @RohanVector 8 months ago +1

      I got lot of error bro please please please please

  • @ivanvijandi2052
    @ivanvijandi2052 8 days ago

    Couldn't be more Argentinian, haha. Great video!

  • @1hpxalphaop741
    @1hpxalphaop741 5 months ago

    srsly like the best video, now i can train my custom hand gestures etc. even, thank youu❤❤

  • @mariamartinez4860
    @mariamartinez4860 10 months ago +2

    Why does it close when you show another hand?

  • @Yousef_Osman2000
    @Yousef_Osman2000 10 days ago +1

    How do I get that function at 18:10?

  • @kane_jester
    @kane_jester 11 months ago +4

    Sir, the project gets closed if more hands are placed in the real-time video. I know that the RandomForest classifier uses only certain features; is there a way so that the program doesn't close if more hands are in the video?

  • @duleamihai2202
    @duleamihai2202 11 months ago +21

    For those who face the error where it can't convert the 'data' values from the dictionary data_dict: just make sure that in the photo samples you are showing the full hand, because if not, there will be inconsistent data and the lists inside data_dict['data'] will not have the same length. Redo the photo-collection part and all should be fine.

  • @abdulbarisoylemez2817
    @abdulbarisoylemez2817 11 months ago

    Thank you, my teacher, great video. I tried it myself, and I did it :)

  • @fragileaf1778
    @fragileaf1778 8 months ago +1

    The camera crashes when I show more than one hand. Can you tell me how it can be fixed?

  • @szmasclips1774
    @szmasclips1774 2 months ago

    Great video, but how do you do the image-collection part of the code?

  • @miladsayedi59
    @miladsayedi59 4 months ago

    Can we make this project with pose-detection models like OpenPose or DeepPose? And what is the difference?

  • @martinsilungwe2725
    @martinsilungwe2725 1 year ago

    I have just subscribed,
    Currently working on a similar project, fingers crossed I'm in the right place..😂

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago

      🤞😀 Good luck with your project, Martin! 🙌

    • @martinsilungwe2725
      @martinsilungwe2725 1 year ago

      @@ComputerVisionEngineer Sir, I have an error: "ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2." What can be the problem? I'm trying to classify all the alphabet letters. Your help will be highly appreciated.

    • @sakshi8806
      @sakshi8806 1 hour ago

      @@martinsilungwe2725 Do you have any solution for it now?

  • @ranjanadevi7965
    @ranjanadevi7965 8 months ago

    Hello, while executing your code: when I set the number of classes greater than 4, the train classifier was unable to generate the model.p file on my device. Can you help me solve this issue?

  • @sivaips680
    @sivaips680 3 months ago

    The model.p file is missing from the folder.

  • @hamzak2883
    @hamzak2883 1 year ago

    First of all I want to thank you for this tutorial. I actually want to make a program for sign language, but I am confused about the dataset and how to process the data, which I will maybe get as videos or images. Can you give me some advice?

  • @ocelottes
    @ocelottes 1 year ago

    Very cool, I have a question: how can I test the accuracy of the detection?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago

      Do you mean the accuracy of the hand detection?

    • @ocelottes
      @ocelottes 1 year ago

      @@ComputerVisionEngineer yes

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago

      @@ocelottes It is Mediapipe hand detection; if you want to test its accuracy, you would need to take another hand detector to compare Mediapipe's detections against.

  • @Pommesperfektion
    @Pommesperfektion 1 year ago +1

    Is your dataset available anywhere online?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +1

      Hey, the dataset I used in this tutorial is not available. But you can create your own dataset following the steps I provide in the video. 😃🙌

  • @MEGHAJJADHAV
    @MEGHAJJADHAV 1 year ago

    How can we make a confusion matrix for the model that was made?

    • @e2mnaturals442
      @e2mnaturals442 8 months ago

      Hi, were you able to solve this? I used:

      import matplotlib.pyplot as plt
      import seaborn as sns

      # conf_matrix comes from sklearn, e.g.:
      # from sklearn.metrics import confusion_matrix
      # conf_matrix = confusion_matrix(y_test, y_predict)

      # Class names
      class_names = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
                     'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']

      # Plot the confusion matrix
      plt.figure(figsize=(15, 15))
      sns.heatmap(conf_matrix, annot=True, fmt='d',
                  cmap='Blues', xticklabels=class_names, yticklabels=class_names)
      plt.title('Confusion Matrix')
      plt.xlabel('Predicted')
      plt.ylabel('True')
      plt.show()

  • @pawanrajbhar6377
    @pawanrajbhar6377 1 year ago +1

    data = np.asarray(data_dict["data"])
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (296,) + inhomogeneous part. Can you help me figure out where I'm going wrong?

    • @foru1854
      @foru1854 11 months ago

      Bro, did you correct the error? Can you please tell me how you did it?

    • @saivaraprasadmandala8558
      @saivaraprasadmandala8558 8 months ago

      (Reposts @duleamihai2202's advice from above: make sure the photo samples show the full hand so all the lists inside data_dict['data'] have the same length.)

  • @sherwingeorge6959
    @sherwingeorge6959 11 months ago

    What python version have you used in this project?

  • @rutujakothale3829
    @rutujakothale3829 5 months ago

    I'm getting this error, please help:
    Traceback (most recent call last):
    File "d:\sign lang\testing.py", line 27, in
    H, W, _ = frame.shape
    AttributeError: 'NoneType' object has no attribute 'shape'
    INFO: Created TensorFlow Lite XNNPACK delegate for CPU.

    • @rentaroiino1789
      @rentaroiino1789 3 months ago

      were you able to find a solution to your problem?

  • @NourashAzmineChowdhury
    @NourashAzmineChowdhury 1 year ago +1

    Sir i am getting this error:
    [ERROR:0@0.045] global obsensor_uvc_stream_channel.cpp:156 cv::obsensor::getStreamChannelGroup Camera index out of range
    Traceback (most recent call last):
    File "D:\sign-language-detector-python-master\collect_imgs.py", line 25, in
    cv2.imshow('frame', frame)
    cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:971: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
    while running collect_imgs.py. Can you help me solve it? Could you provide the model or dataset you used? It would be a big help for me.

  • @legion4924
    @legion4924 1 year ago

    Hello sir, can this project be used with 2 hands?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago

      Hey, yeah sure the project can be adapted for 2 hands. 🙌

    • @legion4924
      @legion4924 1 year ago

      @@ComputerVisionEngineer okay sir thank u🙏

  • @УльянаМедведева-б5и
    @УльянаМедведева-б5и 5 months ago

    Hello everyone, please tell me, maybe someone knows: why, when you run the test, do you get this error: "ValueError: X has 84 features, but RandomForestClassifier is expecting 42 features as input"? How can I fix this?
    Thanks in advance for the answer!

    • @fruitpnchsmuraiG
      @fruitpnchsmuraiG 5 months ago

      hey did it work then?

    • @УльянаМедведева-б5и
      @УльянаМедведева-б5и 5 months ago

      @@fruitpnchsmuraiG Yes, the program starts and detects the gesture, but after a minute it shuts down and gives the above error.

    • @mohamedlhachimi2933
      @mohamedlhachimi2933 4 months ago

      (Reposts the same image-cleanup script shared in an earlier reply above.)

  • @caio_pohlmann
    @caio_pohlmann 1 year ago +3

    How could you make the project recognize two hands?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +2

      Hey, take a look at our discord server, a member of our community has shared the code to train using 2 hands 💪🙌

    • @LincolinARanee
      @LincolinARanee 11 months ago

      @@ComputerVisionEngineer Kindly share your discord server

  • @less_thanONEmin
    @less_thanONEmin 3 months ago

    OH PYTHON

  • @salsabeeltantoush3705
    @salsabeeltantoush3705 1 year ago

    Hi, I have an ELP greyscale external camera that I want to use for this project. I am wondering if all you did in this video still applies if the camera only captures black and white, without other colors?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago

      Hey, yes I think you should be ok with a greyscale camera. Let me know how it goes! 🙌

  • @rolandpopa7724
    @rolandpopa7724 1 year ago

    Hello, thank you for the tutorial. Really helps a lot.
    I got the problem where ValueError: X has 84 features, but RandomForestClassifier is expecting 42 features as input.
    The error occurs only when you insert your second hand in the live camera.
    Is the model trained with both hands but configured only for one?
    How can we change the parameters when we train the model to only expect one hand inputs?
    Thank you for your time.

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +1

      Hey Roland, you could edit the algorithm so it only considers data from one of the two hands (left or right) and ignores the other. Take a look at this example I found online: toptechboy.com/distinguish-between-right-and-left-hands-in-mediapipe/ 😃🙌
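A possible sketch of that one-hand filtering, assuming the Mediapipe Hands `results` object and its `multi_handedness` classification output (the helper name `landmarks_for_hand` is made up for illustration):

```python
# Hypothetical helper: given a Mediapipe Hands `results` object, return the
# landmark set for one chosen hand ("Left" or "Right") and ignore the other,
# so the feature vector always covers exactly 21 landmarks (42 values)
# even when two hands are in frame.
def landmarks_for_hand(results, wanted="Right"):
    if not results.multi_hand_landmarks:
        return None
    # multi_handedness[i] corresponds to multi_hand_landmarks[i]
    for handedness, hand_landmarks in zip(results.multi_handedness,
                                          results.multi_hand_landmarks):
        if handedness.classification[0].label == wanted:
            return hand_landmarks
    return None
```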

    • @martinsilungwe2725
      @martinsilungwe2725 1 year ago

      I have got the same error did you manage to fix it?

    • @salsabeeltantoush3705
      @salsabeeltantoush3705 1 year ago

      @@martinsilungwe2725 same error also

    • @e2mnaturals442
      @e2mnaturals442 8 months ago

      Did you finally solve this problem?
      I got the same error, and all I did was pad each sequence to the maximum length, which fixed it. You also need to apply the same preprocessing at inference time.

    • @omarortiz5463
      @omarortiz5463 5 months ago

      @@e2mnaturals442 hello can you explain me how to do this?

  • @sourabhchandra1740
    @sourabhchandra1740 1 year ago +6

    Hello Sir, very nice video... I also want to make a similar project, but with a bit of a difference: I want to generate the entire subtitle track for people who can't speak, using their hand gestures, during video conferencing in real time.
    Can you please guide me with this? Because I am a complete beginner. Your help will be appreciated. Thanks in advance. 😀

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +4

      Hey Sourabh, it sounds like a complex and very cool project! I would start by saving all the symbols you detect, its confidence score, and the duration of time you detect them so you can analyze this info later on. This is going to help you to understand the problem a little better and also it is going to help you to define rules in order to achieve your goal. 😃💪

    • @Abhaykumar-bu7ei
      @Abhaykumar-bu7ei 1 year ago

      Hi Sourabh were you able to make it if yes could you please share some update or code for the same

  • @LEDAT-AI
    @LEDAT-AI 1 year ago +6

    Hello, I have watched your video and found it very informative. However, I was wondering if you could make a video for recognizing different characters for a sequence of movements, for example, the letter "J" or "Z." Thank you for your video.

  • @f1player95
    @f1player95 2 days ago

    I'm encountering the following error: Exception encountered: Unrecognized keyword arguments passed to DepthwiseConv2D: {'groups': 1}. Can someone help me with this?

  • @susanlaime1318
    @susanlaime1318 5 days ago

    Hello! Thank you so much for the tutorial!! :)
    Although I have trouble finding the script's code from the very beginning: how can I get the code and connect my camera to capture the 100 frames? Is it on GitHub? Under what name? Only the code that we build during the video seems to be there...

  • @UtsavKuntalwad
    @UtsavKuntalwad 9 months ago +2

    Hello, I was adding new alphabets to the dataset and got this error, which I am unable to solve: " File "D:\Major project\.Major Project\code\train_classifier.py", line 11, in
    data = np.asarray(data_dict['data'])
    ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (400,) + inhomogeneous part."

  • @ajisumiardi6736
    @ajisumiardi6736 2 months ago +1

    You're great, man. Thank you for teaching us and for putting in lots of research first to ensure Windows users can replicate the project too.
    .
    Let me leave a log here for other Windows users:
    1. Don't forget to use packages with exactly the same versions as mentioned in requirements_windows.txt.
    2. Use numpy version 1.23.3. I took a sneak peek at your terminal output, which showed you use that version. At first my terminal installed numpy 2.0, with no luck, so I downgraded it.
    3. If you successfully install CMake via the terminal but still get errors when compiling, I suggest installing Visual Studio first.
    I spent my first 4 hours dealing with those errors before finally making it.

  • @Ele-zg7zj
    @Ele-zg7zj 4 months ago

    File "C:\Users\asus\Desktop
    aul\test.py", line 48, in
    predicted_character = label_dict[int(prediction[0])]
    ^^^^^^^^^^^^^^^^^^
    ValueError: invalid literal for int() with base 10: 'B'

  • @dinithnisal643
    @dinithnisal643 1 year ago +2

    Hello Sir, I am following your video to learn about computer vision.
    I have trouble with "DATA_DIR = './data'". Does this directory need to be imported from somewhere, or should we prepare it ourselves? Can you help me solve this?

    • @peterbarasa9190
      @peterbarasa9190 1 year ago +1

      I am also wondering the same. The images seem not to be there.

  • @vamsianurag3415
    @vamsianurag3415 1 year ago +2

    Hi, while going through this code I'm getting: model_dict = pickle.load(open('./model.p', 'rb'))
    FileNotFoundError: [Errno 2] No such file or directory: './model.p', and I didn't find any model.p file in your repository.

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago

      Hey, you can create the model yourself following the steps I describe in the video. 😃🙌

  • @shwetaevangeline
    @shwetaevangeline 5 months ago +3

    Thank you so much, sir for this wonderful project. I've completed my term project easily with the help of your video. Loved how we can create our own data instead of getting it from somewhere else.

  • @zeroboom4
    @zeroboom4 6 months ago +1

    I have tried it with Arabic sign language and it did not work correctly; I get one letter almost every time, and it's the wrong letter. Any ideas that could help me train the model? I got the dataset from Kaggle.

  • @hayatlr3000
    @hayatlr3000 1 year ago +4

    Great tutorial, so helpful for my PFE project. I actually only have to do biometric hand-recognition identification, but you explained the hand contour so well in the part "this is the most important thing". I really need help with the approach to solving it; is it possible for you to help by making a video on it? It's the first time I'm working with Python; I usually work with Matlab. Thank you again for this video.

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +2

      Hey Hayat, I am glad you found it helpful! 😄 Do you mean making a video about how to be strategic when starting a project and choose the most promising approach? Sure, I can do a video about problem solving strategies! 😃🙌

    • @luongtranle2979
      @luongtranle2979 1 year ago

      Do you have a Word report file?

  • @MoominMoomin-f2b
    @MoominMoomin-f2b 1 month ago

    Hello!! Can you tell me which ML algorithm you used in this?

  • @akihitonarihisago4276
    @akihitonarihisago4276 4 months ago +1

    Has anyone tried to implement this for more than 20 classes?

  • @touchwood8404
    @touchwood8404 5 months ago +1

    The mediapipe library is giving an error during installation. What should I do?

  • @akrhythm_
    @akrhythm_ 2 months ago

    File "c:\Users\akrut\OneDrive\Desktop\sign-language-detector\sign-language-detector\train_classifier.py", line 9, in
    data_dict = pickle.load(open('./data.pickle', 'rb'))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    FileNotFoundError: [Errno 2] No such file or directory: './data.pickle'
    PS C:\Users\akrut\OneDrive\Desktop\sign-language-detector\sign-language-detector>
    Can anyone help me out with this error?

    • @ProgrammerPenguin
      @ProgrammerPenguin 2 months ago

      It says there isn't a directory or file called "./data.pickle".

  • @bdtamilgamers8083
    @bdtamilgamers8083 1 year ago +1

    Sir, only 9 characters can be trained. Please help me train 26 characters.

  • @arptv4962
    @arptv4962 1 month ago

    Hello! Thank you for the project. Does anybody know how to fix this error: H, W, _ = frame.shape AttributeError: 'NoneType' object has no attribute 'shape'? I have done all the steps up to the last program, but it looks like it doesn't see the images or something. Has anybody had the same problem?

  • @saurabhmishra7487
    @saurabhmishra7487 4 months ago +1

    The app crashes when using both hands. How can I fix this?

  • @swagatbaruah522
    @swagatbaruah522 1 year ago +1

    EVERYTHING IS WORKING FINE, EXCEPT FOR THE FACT THAT MY FINAL PROGRAM IS UNABLE TO RECOGNIZE ANY SIGN. IT JUST GIVES EVERY SIGN THE SAME LABEL, WHATEVER IS AT INDEX 0 OF THE LABELS LIST. I don't understand why it's not working???

  • @tihbohsyednap8644
    @tihbohsyednap8644 1 year ago +1

    Hello sir, Kindly solve this error for me ----> ValueError: With n_samples=1, test_size=0.2 and train_size=0.8, the resulting train set will be empty. Adjust any of the aforementioned parameters.

  • @yaranassar1208
    @yaranassar1208 4 months ago +1

    Hii!! I loved your video. I learned a lot. I just have one question, if at the end I want to form a sentence and print it, how can I save each character on the screen to have a full sentence at the end?

  • @RohanVector
    @RohanVector 7 months ago +1

    Some hand signs use two hands; what can we do in that situation?

  • @MrFurious0007
    @MrFurious0007 11 months ago +2

    Hello, great tutorial 😀 Can this same approach be applied to British Sign Language, which uses both hands to make gestures? Also, can this be deployed in the real world and used at production level?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  11 months ago +1

      You would need to make some edits in order to use it with both hands but I guess it would work, yes. Regarding the performance, yeah you could train it and improve it so it can be used at a production level. 🙌

    • @MrFurious0007
      @MrFurious0007 11 months ago

      thanks @@ComputerVisionEngineer 😁i'll try and see if it works out

    • @MrFurious0007
      @MrFurious0007 11 months ago +1

      Hey @@ComputerVisionEngineer, it's not working efficiently for British Sign Language, maybe because it uses both hands. Do you have any suggestions on how I can build up my project? It would be a huge help, thanks.

  • @ajitesh.4
    @ajitesh.4 20 days ago

    When I show 2 hands in the frame it stops. How do I solve this?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  13 days ago

      Currently only one hand is supported, you would need to adapt the code and the models so it works with two hands. 🙌

  • @assassinhi4889
    @assassinhi4889 7 months ago +1

    It shows the error "ValueError: setting an array element with a sequence." after loading the dictionary into the model.

    • @mohamedlhachimi2933
      @mohamedlhachimi2933 4 months ago +1

      (Reposts the same image-cleanup script shared in an earlier reply above.)

  • @georgevalentin9483
    @georgevalentin9483 1 year ago +2

    I checked the GitHub repo and there are some changes compared to the video. Why are you subtracting the min of x_ from x (data_aux.append(x - min(x_))), and likewise for y? Why is it necessary to do that instead of just appending x as it is to the array? I saw you did that in the data processing and also in the model testing. Thanks a lot!

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +1

      Hey George! Yeah, I sent that change in a new commit. It makes the solution more robust; you could think about it as a kind of 'normalization'. This makes the classifier learn that the (x, y) position of each landmark is not that important; the distance of each landmark to the other landmarks is what matters most! 😃💪

    • @georgevalentin9483
      @georgevalentin9483 1 year ago

      @@ComputerVisionEngineer Thanks a lot for the answer! I thought it has something to do with the mediapipe library and is a must, but it actually makes sense to be some kind of normalization. Thanks for you time!
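The 'normalization' discussed above can be sketched roughly like this (pure-Python illustration; `normalize_landmarks` is a made-up name, the repo does this inline in a loop):

```python
# Shift every landmark coordinate by the minimum over the hand, so the
# feature vector encodes relative landmark positions and is invariant to
# where the hand sits in the frame.
def normalize_landmarks(xs, ys):
    min_x, min_y = min(xs), min(ys)
    data_aux = []
    for x, y in zip(xs, ys):
        data_aux.append(x - min_x)
        data_aux.append(y - min_y)
    return data_aux
```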

  • @emnahamdi-wq4mz
    @emnahamdi-wq4mz 11 months ago +2

    Hi! Great tutorial, thank you. I have a question: does this program use data augmentation? And did you calculate the sensitivity and accuracy of the program?

  • @CanalIFES
    @CanalIFES 1 year ago +1

    Why do you use a Random Forest classifier algorithm?
    Is it better for this?
    Could I try a pretrained model to get better results?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago

      No particular reason why I used a Random Forest, I think pretty much any other classifier would have a similar performance in this case.

    • @CanalIFES
      @CanalIFES 1 year ago

      @@ComputerVisionEngineer Thanks felipe!!

  • @saivaraprasadmandala8558
    @saivaraprasadmandala8558 8 months ago

    Error:
    Traceback (most recent call last):
    File "h:\Mini Project\Mallikarjun Project\sign-language-detector-python-master\sign-language-detector-python-master\inference_classifier.py", line 7, in
    model_dict = pickle.load(open('./model.p', 'rb'))
    ^^^^^^^^^^^^^^^^^^^^^^^
    FileNotFoundError: [Errno 2] No such file or directory: './model.p'
    Could you help me fix this error, sir?

  • @AkshatManohar
    @AkshatManohar 1 year ago +1

    Hi,
    I am getting an error that ./data/.DS_Store is not a directory / not found.

  • @NarutoTamilan007
    @NarutoTamilan007 2 months ago

    Sir, what is your Python version?

  • @pawanrajbhar6377
    @pawanrajbhar6377 1 year ago

    data = np.asarray[data_dict["data"]]
    ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
    TypeError: 'builtin_function_or_method' object is not subscriptable. Can you help me see where I am going wrong?

  • @prithvisingh2851
    @prithvisingh2851 11 months ago +1

    I have trained my model using only the numbers' data. It is working, but the problem is it only ever shows the numbers 9 or 1 in the frame. Do you think it's because of unclear data or a problem in the training model?
    BTW great tutorial 👍

  • @iinfinixvilla389
    @iinfinixvilla389 2 months ago

    Hello from India, sir. I enjoyed your video very much. I have a small doubt: can you tell me how to check the accuracy of the trained model?

  • @WelcomeToMyLife888
    @WelcomeToMyLife888 1 year ago +5

    great tutorial on how to organize the project into separate steps!

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +2

      Good organization is the key to a successful project! I am happy you enjoyed the video! 😄🙌

  • @Oof_the_gamer
    @Oof_the_gamer 1 month ago

    What is the data.pickle file?

  • @adn4779
    @adn4779 7 months ago +1

    @ComputerVisionEngineer ValueError: X has 84 features, but RandomForestClassifier is expecting 42 features as input. I am getting this error when I run inference_classifier.py. What change should I make in the code?

    • @shwetaevangeline
      @shwetaevangeline 5 months ago

      If you're getting this, that means you're showing something else that isn't in the data. Only show what you've captured. Or else simply increase number of classes and take different pictures from different angles.
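
      A way to guard against this at the data level (a hedged sketch, not from the video; `filter_samples` is a hypothetical helper) is to drop any sample whose feature vector is not exactly 21 landmarks x 2 coordinates = 42 values, which is what frames with two detected hands violate:

```python
# Keep only samples with the expected single-hand feature length
# (21 landmarks x 2 coordinates = 42); two-hand frames give 84 values.
EXPECTED_FEATURES = 42

def filter_samples(data, labels, expected=EXPECTED_FEATURES):
    kept_data, kept_labels = [], []
    for sample, label in zip(data, labels):
        if len(sample) == expected:
            kept_data.append(sample)
            kept_labels.append(label)
    return kept_data, kept_labels

clean_data, clean_labels = filter_samples([[0.1] * 42, [0.2] * 84], ['A', 'B'])
print(clean_labels)  # ['A']  (the 84-feature, two-hand sample is dropped)
```

Running the same filter on both the training data and the live features keeps the classifier's input width consistent.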

    • @mohamedlhachimi2933
      @mohamedlhachimi2933 4 months ago

      I think to solve this problem we have to make the data collection script save only frames where it can detect our hands; otherwise we store bad samples that end up causing errors like "inhomogeneous shapes". I actually avoided the problem by not moving my hand while collecting data. You can also try this script to check the images that are stored. It only prints the paths of the images that are deleted because no hands were detected; it won't display any image windows.
      ##########################################
      import os
      import cv2
      import mediapipe as mp

      mp_hands = mp.solutions.hands

      def process_and_check(image_path):
          hands = mp_hands.Hands(static_image_mode=True)

          # Read the image
          image = cv2.imread(image_path)
          image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

          # Detect hands and landmarks
          results = hands.process(image_rgb)

          if not results.multi_hand_landmarks:
              print(f"Deleted image: {image_path}")
              # Delete the image with no hands detected
              os.remove(image_path)

      # Path to your data folder containing one subfolder per class
      data_folder = "data"

      # Iterate through subfolders
      for folder_name in os.listdir(data_folder):
          folder_path = os.path.join(data_folder, folder_name)
          if os.path.isdir(folder_path):
              print(f"Checking images in folder: {folder_name}")
              # Iterate through images in the folder
              for filename in os.listdir(folder_path):
                  if filename.endswith(".jpg") or filename.endswith(".png"):
                      image_path = os.path.join(folder_path, filename)
                      process_and_check(image_path)

    • @luciferani8279
      @luciferani8279 3 months ago

      Do not show two hands at the same time on your camera.

  • @tihbohsyednap8644
    @tihbohsyednap8644 1 year ago +1

    Sir, kindly help me with this error:

    ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.

    • @tihbohsyednap8644
      @tihbohsyednap8644 1 year ago

      Sir, kindly help me with this error. I am working on this as my final year project and I have to extend it as my major project work.

  • @itzbumblebee6694
    @itzbumblebee6694 8 days ago

    Do you have a research paper for this project?

  • @prathamupadhyay1265
    @prathamupadhyay1265 1 year ago +2

    How can I get an accuracy/confidence value for the predicted letters?
    Basically I want a live confidence value for the letters that are predicted, since if you show any random hand gesture it will always predict some random letter. It would be much better if you could also show live confidence. Is it possible? Can you guide me a little bit through this?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +1

      Try using the method 'predict_proba' instead of 'predict'. You will get a probability vector over all the classes. Taking the largest value will give you the confidence you are looking for. 💪💪
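
      As a sketch (with made-up class names and probabilities standing in for a real model.predict_proba(...) row), the thresholding could look like:

```python
# Turn a predict_proba row into (label, confidence); below the threshold,
# report 'unknown' so random gestures are not forced onto a known letter.
def top_prediction(classes, probs, threshold=0.6):
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return 'unknown', probs[best]
    return classes[best], probs[best]

# probs would come from model.predict_proba([features])[0]
print(top_prediction(['A', 'B', 'L'], [0.1, 0.2, 0.7]))  # ('L', 0.7)
print(top_prediction(['A', 'B', 'L'], [0.4, 0.3, 0.3]))  # ('unknown', 0.4)
```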

    • @prathamupadhyay1265
      @prathamupadhyay1265 1 year ago

      @@ComputerVisionEngineer Thanks a lot you are amazing !!! 😃

    • @yashanchule9641
      @yashanchule9641 1 year ago

      @@prathamupadhyay1265 Bro, if you don't mind, could you share a zip file of your code with me? I'm getting many errors and I have tried many steps, but nothing is working. PLZ!!!!!!

    • @yashanchule9641
      @yashanchule9641 1 year ago

      Please, bro.

    • @054_vishwadhimar4
      @054_vishwadhimar4 1 year ago

      @@yashanchule9641 The GitHub link is there... or have you tried that too?!

  • @thesoftwareguy2183
    @thesoftwareguy2183 6 months ago +1

    Sir!! You have my respect. I have really learned a lot from your whole video. Just keep making these ML/DL project videos, implementing exciting ML/DL projects from scratch as you have done here.
    Just keep going, sir!!!
    Thank you so much!!✨✨✨✨✨✨❤❤❤❤❤❤

  • @kiranmahapatra8716
    @kiranmahapatra8716 1 year ago +1

    Sir, please help. During training it shows a ValueError at data = np.asarray(data_dict['data']):
    ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (199,) + inhomogeneous part. This is for 3 classes.

    • @SohamKaranjkar
      @SohamKaranjkar 9 months ago +1

      I got the same error. Were you able to solve it?

    • @krzysztofgalek5276
      @krzysztofgalek5276 8 months ago +1

      Did you solve it?

    • @e2mnaturals442
      @e2mnaturals442 8 months ago

      I was able to sort it out using padding.
      If you want me to explain more, I will be glad to.

    • @Elenas1178
      @Elenas1178 7 months ago

      @@e2mnaturals442 please explain

    • @preetirathod5244
      @preetirathod5244 2 months ago

      @@e2mnaturals442 Can you please explain it?

  • @texsesyt2902
    @texsesyt2902 1 year ago +2

    Hello sir, I am getting this error:
    ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.
    x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, shuffle=True, stratify=labels)
    I observe that if I remove stratify I do not get the error, but after that I get:
    0.0% of samples were classified correctly !

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago

      Hey, how many different symbols are you trying to classify? How did you collect the data for each symbol?

    • @texsesyt2902
      @texsesyt2902 1 year ago

      @@ComputerVisionEngineer I changed number_of_classes to 5 and collected data through OpenCV by capturing images (using the method described in this video).
      Note: Python version 3.11.2

    • @texsesyt2902
      @texsesyt2902 1 year ago

      5 symbols in total; each has images numbered 0 to 99.

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +1

      There is probably a bug in the data. Take a look at 'labels': how many elements are there for the different classes? Is it an array of integers or some other data type?
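
      A quick way to run that check (a sketch; the toy labels list stands in for data_dict['labels']):

```python
from collections import Counter

# Count samples per class; stratified train_test_split needs >= 2 per class.
labels = ['A', 'A', 'B', 'B', 'L']   # stand-in for data_dict['labels']
counts = Counter(labels)
too_few = [cls for cls, n in counts.items() if n < 2]
print(counts)    # shows how many samples each class really has
print(too_few)   # any class listed here breaks stratify -> ['L']
```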

    • @texsesyt2902
      @texsesyt2902 1 year ago

      @@ComputerVisionEngineer Now I am getting this error when I create 25 classes (one for each alphabet letter):
      data = np.asarray(data_dict['data'])
      ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2471,) + inhomogeneous part.

  • @darrellardhanihidayat555
    @darrellardhanihidayat555 9 months ago +1

    Hi sir, I got an error in inference_classifier.py. The error says:
    Line 36, in
    H, W, _ = frame.shape
    AttributeError: 'NoneType' object has no attribute 'shape'
    Thank you for the help🙏🏻

    • @RohanVector
      @RohanVector 8 months ago

      Is it fully working for you now?
      I am not able to run the first step. Please help me.

    • @RohanVector
      @RohanVector 8 months ago

      In collect_imgs, the cv2.imshow call gives an error, bro. Kindly help me.

    • @RohanVector
      @RohanVector 8 months ago

      Error: size.width>0 && size.height>0 in function 'cv::imshow'

    • @manasayjoseph1075
      @manasayjoseph1075 8 months ago

      @@RohanVector Can you please show the error?

    • @saivaraprasadmandala8558
      @saivaraprasadmandala8558 8 months ago

      @@RohanVector Change the line to cap = cv2.VideoCapture(0).
      Previously it was cap = cv2.VideoCapture(2).

  • @VnZR_
    @VnZR_ 11 months ago +1

    Hi... Since many signs involve some type of movement, I wonder if videos could be used in place of pictures. I hope you can reply to me because your video is very helpful for us. Thanks in advance.

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  10 months ago +2

      Yes, you could try with video classification. 🙌

    • @VnZR_
      @VnZR_ 3 months ago

      @@ComputerVisionEngineer How do I work with video input in PyCharm?

    • @VnZR_
      @VnZR_ 3 months ago

      I hope you can help us. Thank you.

    • @VnZR_
      @VnZR_ 3 months ago

      Is there a front end that can connect to it in PyCharm?

  • @harshasshet6755
    @harshasshet6755 5 months ago

    I am getting plots for every dataset size I have taken. Is that fine? Because I have a plt.savefig call, annotated so that the plot for every dataset size is saved in the main data directory.

  • @tharas3368
    @tharas3368 6 months ago

    Can I get the documentation of this project, please? 🙏 It's my humble request.

  • @dinem0023
    @dinem0023 4 months ago

    For all hand gestures I am getting only 'L'. What could be the reason? Can anyone tell me?

  • @sandanuwan4441
    @sandanuwan4441 6 months ago

    I am new to AI. I just want to know: are we using natural language processing, machine learning, and computer vision here?

  • @abdallahsamir2707
    @abdallahsamir2707 1 year ago +1

    Hello, I have watched your video and found it very informative. However, I was wondering: what are the limitations of this project?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +2

      Hey, limitations in terms of possible symbols? I would say any static symbol made with only one hand.

  • @e2mnaturals442
    @e2mnaturals442 8 months ago +2

    Hello from Nigeria!
    I must say thanks for this video.
    It was short, precise, and educative.
    Yes, I had some errors, which I was able to handle thanks to my past knowledge of deep learning. For those that had issues with the disparity in the length of the data, you can always pad to the maximum length.
    Currently I have a model that can identify 26 classes correctly, and I will definitely increase the number of classes. I made each class have 700 images under different lighting conditions.
    Thanks for all you do.
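
    For those asking, the padding can be sketched like this (pad_samples is a hypothetical helper, not from the video; dropping the short samples instead is often the cleaner fix):

```python
# Pad every feature vector with zeros up to the longest one, so that
# np.asarray(data) gets a homogeneous 2-D shape instead of raising
# "inhomogeneous shape" errors.
def pad_samples(data, pad_value=0.0):
    width = max(len(sample) for sample in data)
    return [list(sample) + [pad_value] * (width - len(sample)) for sample in data]

print(pad_samples([[0.1, 0.2], [0.3]]))  # [[0.1, 0.2], [0.3, 0.0]]
```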

    • @ijaspr5486
      @ijaspr5486 8 months ago

      Bro, can you send me the files for your project?

    • @e2mnaturals442
      @e2mnaturals442 8 months ago

      @@ijaspr5486 like the whole file?

    • @rarir0012
      @rarir0012 5 months ago

      Could you share the GitHub link of your project?

    • @aryanrana-o6n
      @aryanrana-o6n 5 months ago

      @@e2mnaturals442 Yes, like the GitHub code, or I can give you my social media ID.

    • @TheDreamsandTears
      @TheDreamsandTears 3 months ago

      @e2mnaturals442 Can you share your code? I'm having some errors while I try to identify the letters. Also, in your code, could you handle signs with both hands and with movement?

  • @ShraddhaRastogi-l4l
    @ShraddhaRastogi-l4l 1 year ago

    cv2.imshow('frame', frame)
    cv2.error: OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:971: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
    I am getting this error. Please, someone help me with this.

    • @MEGHAJJADHAV
      @MEGHAJJADHAV 1 year ago +1

      Try changing cv2.VideoCapture(2) to cv2.VideoCapture(0)

  • @makiizenin
    @makiizenin 1 year ago

    Hello sir, I ran into a problem. I did the same as you and my code worked, but it only captures for about 5 minutes; then the camera shuts down automatically with some errors. :((((

  • @nilayguler8397
    @nilayguler8397 6 months ago

    Thanks a lot! I really appreciate keeping this under an hour as well :)) We are trying to use this model in Flutter to develop a mobile app. How can we integrate it with Flutter?

  • @septian5761
    @septian5761 4 months ago

    Can I ask how you would move this to mobile / Android Studio?

  • @raziehahmadi4185
    @raziehahmadi4185 4 months ago

    Thanks for your good tutorial.
    How should I proceed for the rest of the letters?

  • @Om-id1qr
    @Om-id1qr 1 year ago +1

    Great tutorial! Can you tell me how can I do this for Indian Sign Language which uses 2 hands?

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +1

      I am looking at the Indian sign language alphabet and I see some characters are done with 2 hands and others with 1 hand. In order to do something based on landmarks, as we did in this video, you would have to train 2 classifiers: one taking as input the landmarks of one hand only (as we did in the video), and the other taking as input the landmarks of both hands. Then add some logic to apply one classifier or the other depending on how many hands appear in the frame. Or you can follow a different approach and train an image classifier on a crop of the hand(s). 💪🙌
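
      The routing logic could be sketched like this (hypothetical: the two lambdas stand in for the two trained classifiers):

```python
# Route to a one-hand or a two-hand classifier depending on how many
# hands were detected in the frame (each hand -> 42 features).
def classify(hands_features, one_hand_model, two_hand_model):
    if len(hands_features) == 1:
        return one_hand_model(hands_features[0])
    if len(hands_features) == 2:
        # concatenate both hands into one 84-value feature vector
        return two_hand_model(hands_features[0] + hands_features[1])
    return None  # no hands detected (or more than two)

one = lambda feats: 'one-hand sign'   # stand-in for the 42-feature model
two = lambda feats: 'two-hand sign'   # stand-in for the 84-feature model
print(classify([[0.1] * 42], one, two))              # one-hand sign
print(classify([[0.1] * 42, [0.2] * 42], one, two))  # two-hand sign
```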

    • @v5j7bxb
      @v5j7bxb 5 months ago

      Hi! Have you finished working on this project? Did it work?

  • @harshasshet6755
    @harshasshet6755 3 months ago

    I am facing a weird problem: I have altered your project to cover all 26 letters, but whatever I show, I get the wrong letter.

  • @nitishsaini63
    @nitishsaini63 10 months ago

    raise ValueError(
    ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.
    Can anyone resolve this error?

  • @abhikpanda1581
    @abhikpanda1581 7 months ago

    I am getting this error: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2. Can anybody please help me out?

  • @febriandewanto2447
    @febriandewanto2447 5 months ago

    Thank you, what was taught is very clear. I want to ask: what if the dataset comes from a public video that has initial and final movements? Do the start and end frames go into training? And would this use deep learning?

  • @iantang2048
    @iantang2048 11 months ago

    Hi sir,
    Thanks for your tutorial.
    Yet I have a problem locating the data folder (./data): I received the error message [Errno 20] Not a directory: './data/.DS_Store' while using "create_dataset.py". Currently all files are on the desktop. Do you know why? (I'm using a MacBook)

    • @gXLg
      @gXLg 11 months ago +2

      The thing about Apple is that MacOS often puts a file called ".DS_Store" in the directory which stores some information. In your code where you iterate over folders, compare the name with ".DS_Store" and simply skip it
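
      In code, that check could look like this (a sketch; class_dirs is a hypothetical helper around the tutorial's ./data layout):

```python
import os

# List only real class directories, skipping files such as the
# .DS_Store that macOS drops into the data folder.
def class_dirs(data_dir):
    return [name for name in sorted(os.listdir(data_dir))
            if os.path.isdir(os.path.join(data_dir, name))]
```

Iterating over class_dirs('./data') instead of os.listdir('./data') avoids the "[Errno 20] Not a directory" error.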

  • @travisfernandes5387
    @travisfernandes5387 6 months ago

    How can I make this project web-based, e.g. with React or Flask?

  • @AnupamMoharana
    @AnupamMoharana 1 year ago +1

    Expected 84 features, but got 42
    This error pops up every time

    • @ComputerVisionEngineer
      @ComputerVisionEngineer  1 year ago +1

      Hey Anupam, seems like you are training the classifier with twice the features as you use in inference. Are you training the classifier using gestures from one hand only?

    • @AnupamMoharana
      @AnupamMoharana 1 year ago

      @@ComputerVisionEngineer Yeah, I used one hand only. I will try with both.

    • @ashokreddy6602
      @ashokreddy6602 1 year ago

      I'm getting the same error. How do I resolve it? I used one hand only.

    • @pirson910
      @pirson910 1 year ago

      @@ashokreddy6602 You probably have to collect the images again.

  • @lolalikee
    @lolalikee 11 months ago

    Can the project be exported to an .exe? I'm worried because of the pickle file.