Awesome demo! I could see this being very useful for DICOM de-identification. Occasionally there will be PHI burned into the pixel data. Thank you for sharing this library.
Hello sir, thanks for the video. How can we use blocklist effectively in EasyOCR?
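A minimal sketch of passing a blocklist to readtext, where the image path and the blocked characters are just placeholders to adapt:

```python
import easyocr

reader = easyocr.Reader(['en'], gpu=False)

# blocklist is a string of characters the recognizer should never output;
# the characters chosen here are an arbitrary example
results = reader.readtext('sample_image.jpg', blocklist='|~`^')

for bbox, text, confidence in results:
    print(text, confidence)
```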
Hi Sir, (Line number 19) the Reader is taking almost 1 hour to reach 5% and then ends with "ConnectionEndedwitherror". Why is it taking this much time? Is there any offline method?
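One workaround people use when the automatic model download keeps failing is to download the EasyOCR model files manually, put them in a local folder, and point the Reader at that folder with downloading disabled. A sketch, where the folder path is a placeholder:

```python
import easyocr

# Assumes the detection/recognition model files were downloaded manually
# and placed in 'path/to/downloaded_models'
reader = easyocr.Reader(
    ['en'],
    gpu=False,
    model_storage_directory='path/to/downloaded_models',
    download_enabled=False,  # never attempt a network download
)

results = reader.readtext('sample_image.jpg')
```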
Dear sir, in episode 183 the Keras-OCR library was used, but now you are talking about an even better one! Thanks!!
Yes, you are right. There are many libraries for OCR and I like this one.
Does image quality affect the process of extracting text?
What is the roadmap to develop our own OCR model using an invoice dataset?
How can we apply it to detect time data at the top of images, as in the case of sky images?
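One simple approach is to crop the strip where the timestamp overlay sits and restrict recognition to timestamp characters. A rough sketch; the 10% crop height, the character set, and the file name are all assumptions to tune:

```python
import cv2
import easyocr

image = cv2.imread('sky_image.jpg')

# Crop the top strip where the timestamp is expected (10% is an arbitrary guess)
top_strip = image[: int(image.shape[0] * 0.1), :]

reader = easyocr.Reader(['en'], gpu=False)
# Limit output to characters that typically appear in timestamps
results = reader.readtext(top_strip, allowlist='0123456789:/- ')

for bbox, text, conf in results:
    print(text, conf)
```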
Thank you for your content. Will you do Mask R-CNN videos?
Sorry, not any time soon.
Can you help me with handwritten text? Are there any resources for handwriting recognition?
Thank you for your video. I want to show only the selected text data on the image as text, like in your video. How can we do it?
You need to perform some post-OCR operations to select the text you want from the stored text that OCR extracted.
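For example, you can filter the readtext results after the fact and keep only what you want. A rough sketch, where the 0.5 confidence cutoff and the keyword are arbitrary choices for illustration:

```python
import easyocr

reader = easyocr.Reader(['en'], gpu=False)
results = reader.readtext('sample_image.jpg')  # list of (bbox, text, confidence)

# Keep only detections that are confident enough and contain a keyword we want
selected = [(bbox, text) for bbox, text, conf in results
            if conf > 0.5 and 'total' in text.lower()]

for bbox, text in selected:
    print(text, bbox)
```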
Can anyone suggest how we can train EasyOCR on our own dataset?
Love your videos. What if the text is at 180 degrees (upside down)? Can I still apply this?
Interesting. How do humans read it if it is upside down? Any real use cases where the text is upside down?
@@KarthikArumugham Use case: I have a coin on a conveyor belt and I am trying to extract the year. The coin arrives in all rotations. How can I get the year off the coin? I could either orient the coin or read the rotated text, but I am having issues with both. Any thoughts? Thanks.
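One thing worth trying is readtext's rotation_info argument, which makes EasyOCR also test extra orientations; for arbitrary angles you could additionally rotate the crop in small steps and keep the highest-confidence result. A sketch, with placeholder angles and file name:

```python
import easyocr

reader = easyocr.Reader(['en'], gpu=False)

# rotation_info asks readtext to also try these rotations (in degrees)
# in addition to the upright orientation
results = reader.readtext('coin_crop.jpg', rotation_info=[90, 180, 270])

for bbox, text, conf in results:
    print(text, conf)
```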
Hi Sir,
Thank you for this awesome video. How do I handle compatibility issues?
Example:
reader = easyocr.Reader(['en','hi','ta'], gpu=False) # English, Hindi and Tamil
I am getting the error below:
*ValueError: Tamil is only compatible with English, try lang_list=["ta","en"]*
Do you have any solution/idea to overcome this issue?
Their models are not trained to interpret many languages combined at the same time. Many regional languages are compatible with English but not with each other.
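In practice that means splitting incompatible languages across separate Reader instances instead of one combined list. A sketch:

```python
import easyocr

# Tamil (like several other regional scripts) can only be paired with English,
# so use separate readers rather than Reader(['en', 'hi', 'ta'])
reader_ta = easyocr.Reader(['ta', 'en'], gpu=False)  # Tamil + English
reader_hi = easyocr.Reader(['hi', 'en'], gpu=False)  # Hindi + English

results_ta = reader_ta.readtext('sample_image.jpg')
results_hi = reader_hi.readtext('sample_image.jpg')
```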
Can I first check whether the text image is blurry or not, and then implement this logic?
Interesting question. You need to find tricks that quantify the blurriness of text. If you can segment the text part, you can blur it at various levels and compare the total pixel numbers. Also, it is common to use Laplacian from opencv to check blur in the entire image. You can set a threshold for the filtered image and send the ‘good’ ones to OCR.
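A minimal sketch of the Laplacian-variance check; the threshold of 100 is an arbitrary starting point you would tune on your own images:

```python
import cv2
import easyocr

image = cv2.imread('sample_image.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Variance of the Laplacian is a common sharpness measure:
# low variance usually indicates a blurry image
blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()

if blur_score > 100:  # arbitrary threshold, tune for your data
    reader = easyocr.Reader(['en'], gpu=False)
    print(reader.readtext('sample_image.jpg'))
else:
    print(f'Image looks too blurry for OCR (score={blur_score:.1f})')
```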
Can this be used to extract scale bars from microstructure images?
No. This only works with text.
Do you have solutions for alphanumeric recognition? I have tried all the methods (Tesseract, EasyOCR, etc.) and they ultimately fail, for example on ABC00OO1 and SI1234H.
Not sure, maybe EasyOCR works for alphanumeric text. Please let others know if you have tried it and whether you succeeded or failed.
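One thing that sometimes helps with strict alphanumeric codes is restricting the recognizer to an allowlist, so it cannot output anything outside that character set; it will not fully resolve look-alikes such as 0 vs O, but it narrows the search. A sketch with a placeholder image path:

```python
import string
import easyocr

reader = easyocr.Reader(['en'], gpu=False)

# Restrict output to upper-case letters and digits only
allowed = string.ascii_uppercase + string.digits
results = reader.readtext('serial_number.jpg', allowlist=allowed)

for bbox, text, conf in results:
    print(text, conf)
```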
Dear sir, thanks for the video. I wanted to extract a table from a figure, but it didn't work. Any ideas?
EasyOCR is designed to extract text from images. I don't think it has the capability of understanding if something is in a table format.
@@DigitalSreeni Do you suggest some Python API for tables?
I cannot think of anything off the top of my head. Please look into Python libraries that are designed for scanning bills and receipts.
That is awesome!!
Thanks.
EasyOCR is ignoring symbols like dots, @, etc.
Unknown C++ exception from OpenCV code
Which OpenCV version are you using?
Version 0.60 is working fine for me.
EasyOCR will only load about 4% of the model and then it quits.
Not sure of this problem, I worked with it on two different systems and I only had smooth experience. You may want to report your issue to them so they can fix it for everyone.
@@DigitalSreeni I experienced this problem when trying to run the program in an interpreter. When I run it in the Python environment, it performs as you described.
Not sure why it would behave differently when run from the interpreter, weird!