Dr Rezaei · University of Leeds
United Kingdom
Joined 29 Nov 2019
I am an Associate Professor of Computer Vision and Machine Learning at the University of Leeds. With 15+ years of experience in academia and industry, my research interests are in Autonomous Vehicles, Driver Behaviour Monitoring, Pedestrian Activity Recognition, Object Detection, and Tracking.
In this channel, I share some of the interesting outcomes of our research in the field including the link to source codes!
I hope the University Students, PhDs, and Researchers find the content of this channel useful and enjoy it.
Self Driving Cars and Autonomous Vehicles Technology
A brief review of the Top 10 Computer Vision and AI-based technologies in #AutonomousVehicles, #RoboTaxis, and #SelfDrivingCars. In this video you will see how automated cars perceive the traffic and road environment via various sensor modalities and #AItechnology to interpret the traffic scene, predict the road users' behaviour, and estimate the moving trajectory of other vehicles.
...
Video Timestamps:
0:00 Pedestrian crossing intention prediction
0:04 Road Users Detection
0:13 Monocular 3D object detection and speed estimation
0:18 LiDAR point cloud and depth estimation
0:20 Driving simulation at the University of Leeds
0:24 360 degrees LiDAR sensors
0:35 Joining the roundabout traffic flow
0:41 Driver state monitoring
0:53 Automated driving in adverse weather conditions
1:10 Our technology partners in the Hi-Drive flagship project 2021-2025
This video represents a summary of the current activities in my research group at the University of Leeds, UK, in collaboration with pioneering car companies and OEMs.
For more information please visit other videos on this channel or my university webpage: environment.leeds.ac.uk/transport/staff/9408/dr-mahdi-rezaei
Views: 1,901
Videos
3D-Net: Monocular 3D object recognition for traffic monitoring
19K views · 3 years ago
Finally, our extensive research for 3D vehicle/pedestrian detection and interactions is out, including the SOURCE CODE (provided on 02/05/2023)! We expect a high level of interest, especially from the #ComputerVision, #DNN, #VehicleAutomation, and #ITS research communities. Open Access Article: www.sciencedirect.com/science/article/pii/S0957417423007558 Source code: codeocean.com/capsule/7713588/tree/v1 Tha...
Towards Highly Automated Vehicles
765 views · 3 years ago
This video represents a selection of our research outcomes at the University of Leeds, Institute for Transport Studies, for Vehicle Automation, including: • 2D Road and Traffic Perception • 3D Object Detection and Traffic Monitoring • Driver Activity Recognition • Driver Head Pose and Behaviour Monitoring. #ComputerVision #MachineLearning #DeepLearning #DriverMonitoring #DriverBehaviour #TrafficPer...
Distance Estimation & Decision Making at Roundabouts
1.5K views · 3 years ago
In this video, we detect vehicles and their distances at a roundabout in order to make a decision for an Autonomous Vehicle (AV) to merge and enter the roundabout when it is safe and appropriate to do so. Look at the number of times we offer a green signal for the AV, but the human driver misses the opportunity to merge (particularly at 0:17 and 1:00), possibly due to the lack of concent...
Autonomous Vehicles: Road & Driver Monitoring
589 views · 3 years ago
This video represents a selection of our research outcomes at the University of Leeds, Institute for Transport Studies, for Vehicle Automation, including: • 2D Road and Traffic Perception • 3D Object Detection and Traffic Monitoring • Driver Activity Recognition • Driver Head Pose and Behaviour Monitoring. #ComputerVision #MachineLearning #DeepLearning #DriverMonitoring #DriverBehaviour #TrafficPer...
Driver Behaviour Monitoring
1.9K views · 3 years ago
This video demonstrates very precise driver behaviour monitoring, distraction, and drowsiness detection. The system is able to detect the following states using a single RGB camera: • Distraction • Yawning • Drowsiness (delayed blinking) • Head nodding (sleeping) • Looking left / right • Texting left / right • Phoning left / right • Smoking • Speaking • Makeup • Searching around. The first part o...
Road Monitoring and Traffic Perception (car/bus/truck/cyclist/pedestrian/ traffic-light detection).
669 views · 3 years ago
Precise Object Detection and Traffic Perception for Autonomous Vehicles. This video demonstrates a sample result of our exciting research towards Autonomous Vehicles in joint collaboration with my PGR research students at the University of Leeds, Institute for Transport Studies. Joint work with Mohsen Azarmi and Farzam Mohammadpoor. More information about our activities: environment.leeds.ac.uk...
3D Object Detection and Tracking using YOLO4 in Autonomous Vehicles
15K views · 3 years ago
The video represents state-of-the-art 3D object detection, Bird's eye view localisation, Tracking, Trajectory estimation, and Speed detection using a basic surveillance camera and the YOLOv4 Deep Neural Network framework. This video is part of our research conducted at the University of Leeds, Institute for Transport Studies, UK, by Mahdi Rezaei in joint collaboration with Mohsen Azarmi and Far...
2D Object Detection and Tracking using YOLO4 in Autonomous Vehicles
1.8K views · 3 years ago
The video represents state-of-the-art 2D object detection, Bird's eye view localisation, Tracking, Trajectory estimation, and Speed detection using a basic surveillance camera and the YOLOv4 Deep Neural Network framework. This video is part of research work at the University of Leeds, Institute for Transport Studies, UK, by Mahdi Rezaei in joint collaboration with Mohsen Azarmi and Farzam Moha...
DeepSOCIAL: A Deep Learning Based Social Distancing Monitoring
1K views · 4 years ago
"DeepSOCIAL: Social Distancing Monitoring and Infection Risk Assessment in COVID-19 Pandemic" is peer-reviewed research, published by the Journal of Applied Sciences 2020, Special Issue on Fighting COVID-19 (Free Access: doi.org/10.3390/app10217514). As a contribution towards global health and safety during the COVID-19 pandemic, we developed a very accurate automated social distancing ...
People Detection and Tracking
546 views · 4 years ago
This video represents a very accurate people detection and tracking model we developed for Social-Distancing monitoring in COVID-19 pandemic. For more information please refer to our paper at www.medrxiv.org/content/10.1101/2020.08.27.20183277v1 and arxiv.org/abs/2008.11672
Crowd Map Analysis for Social Distancing Monitoring
420 views · 4 years ago
This is the crowd heat map we developed for Social Distancing violation detection and analysis in COVID-19 Pandemic. For further information please read our Open Access paper at: doi.org/10.3390/app10217514 Update: The code is now available on our Github page: github.com/DrMahdiRezaei/DeepSOCIAL
DeepSOCIAL: Social Distancing and Monitoring in COVID-19 Pandemic
1.2K views · 4 years ago
DeepSOCIAL: Social Distancing Monitoring and Infection Risk Assessment in COVID-19 Pandemic. As a contribution towards global health and safety during the COVID-19 pandemic, we developed a very accurate automated social distancing monitoring methodology using Computer Vision and Deep Neural Networks. The model is evaluated on the Oxford Town Centre dataset with superior results. You ...
DeepSOCIAL: Social Distancing Monitoring, People Detection and Tracking
656 views · 4 years ago
For months, the World Health Organisation (WHO) and scientists believed that COVID-19 is only transmittable via droplets emitted when people cough or sneeze and that it does not linger in the air. However, on 8 July 2020, WHO accepted that there is emerging evidence that COVID-19 can be spread by tiny particles suspended in the air after people talk or breathe, especially in crowded, closed environ...
Spotlight - Head Pose Estimation & Vehicle Detection
769 views · 4 years ago
A Video Spotlight Related to the Book: Computer Vision for Driver Assistance: Simultaneous Traffic and Driver Monitoring. 2017-2018 amazon.com/author/mahdirezaei MahdiRezaei.auckland.ac.nz
This is kind of amazing!
Appreciated!
I've been following this project for a long time, and I'm not sure when the creator will release a detailed tutorial.
Please see the video description. It has now been 7 months since we published the code. :)
I'm very interested in this work. Is it possible to apply it to a custom dataset? I guess the answer is yes, but it might take a lot of work.
Yes, please refer to the published paper and code
Sir, could you please provide the code for training the model on the dataset?
The details are provided in the paper + code.
Do I need a pretrained model for 3D boxes, or can I use transfer learning? Or is it better to run a 3D-box algorithm on top of a model trained on 2D bounding boxes?
The code is shared now! You may have a look
We are pleased to let you know that the source code is just shared! Please revisit.
Can you please share the code link?
The source code is available at github.com/DrMahdiRezaei/DeepSOCIAL. We hope you enjoy it.
Your project is really awesome! Can you please share the code?
Good news. The code is shared now! Please revisit.
top
Hello Mr. Rezaei, I tried to contact you by email; I hope to hear from you regarding my queries.
Just found your email in my spam folder. I am afraid we do not work on parking-lot monitoring and have no time to investigate it further for you. You would be better off contacting someone who works directly in that area.
Do you have any github code available? It would be awesome to play around with real code. Amazing stuff you are doing out there!
Not yet
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Great! How do you map the perspective view to a bird's-eye view? In other words, how do you translate x, y pixels to world coordinates?
Please refer to our pre-print and the given references for BEV mapping.
We are pleased to let you know that the source code is just shared! Please revisit.
How many fps do you process? I believe you don't need 60 fps or even 30 fps for traffic monitoring.
We process 30 fps. Thanks for asking
Could you please share the code?
The code cannot be released at this stage due to the commercialisation of the project.
We are pleased to let you know that the source code is shared now! Please revisit.
Is there a research paper on this or a link where I can get more details on the implementation?
No. We have not published a paper for this project.
So you simply used a monocular camera to do the depth estimation using a CNN? Is this an end-to-end model, or is the pipeline composed of several discrete/separate components?
We have used a monocular camera and a CNN for vehicle detection, plus an adaptation of an inverse perspective model for distance estimation.
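The flat-road inverse perspective idea mentioned in the reply above can be sketched as follows. This is only a minimal illustration, not the published system; the function name and all camera parameters (height, focal length, horizon row) are hypothetical assumptions:

```python
def ground_distance(y_bottom_px, cam_height_m=1.5, focal_px=1000.0, horizon_y_px=360.0):
    """Flat-road distance to the point where a bounding box's lower edge
    touches the ground, seen by a single calibrated monocular camera.

    By similar triangles: cam_height / distance = (y_bottom - y_horizon) / focal.
    All default parameter values here are made-up assumptions.
    """
    dy = y_bottom_px - horizon_y_px
    if dy <= 0:
        raise ValueError("point lies at or above the horizon; distance is undefined")
    return cam_height_m * focal_px / dy

# A vehicle whose box bottom sits 200 px below the horizon row:
print(ground_distance(560.0))  # 1.5 * 1000 / 200 = 7.5 (metres)
```

The key point is that the lower edge of the bounding box is assumed to lie on the road plane, which is why a mislocated box bottom (noise, occlusion) directly translates into a distance error.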
Good job! Your videos on Chinese video sites have more than 10,000 hits.
Thank you!
We are pleased to let you know that the source code is now shared! Please revisit.
Amazing article! Thank you so much for including a link to it
Glad you find it useful!
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Sir, where is the code?
We have not shared the code.
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
I noticed in the video that a closer car is shown as farther away and a farther car as closer. I captured a screenshot of the error: the wider (closer) car is given a greater distance and the narrower (farther) car a smaller one, even though in the video both are very near each other. Accuracy is suffering here.
Thank you for your comment. Obviously the accuracy is not always 100%. Our distance calculation reference is the location of the lower edge of the bounding boxes. If for any reason (e.g. noise, occlusion, etc.) an error occurs in identifying a proper bounding box for a vehicle, there will be some minor errors in distance estimation. However, this rarely happens for close-distance vehicles. Fortunately, at a roundabout, the decision making is based on the closest vehicles around the ego-vehicle, not all vehicles or very far vehicles. Please also note that this algorithm only uses a simple camera; no other sensors such as LiDAR or RADAR are used for distance estimation. Hope that helps.
Please share the source code.
We may share the code after official publication of our article which is currently under review.
@@DrMahdiRezaei Thanks for reply.
Can you share codes
We may share the code after official publication of our article which is currently under review.
@@DrMahdiRezaei Thank you. let us know once available.
Great job! I have been doing the same work, but I need high-quality datasets. Could you share the source of the training datasets?
Thanks for your interest. We have not used this video for training purposes; it is only used for testing. The video belongs to the University of Leeds and is protected under the UK General Data Protection Regulation (GDPR). You will need to use other similar CCTV videos.
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
So basically we can make a robot which detects human and: If box_is_blue { Kill }
Well, this is not the purpose of this project :)
What a good idea! Will the source code be released soon?
Thank you for your interest, Didi. Currently we are in discussions with some third parties who are interested in commercialising this project; therefore, unfortunately, we are not allowed to release the code at this stage.
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Hi, may I ask you to share the MIO-TCD dataset? The original link has disappeared, and only a small part of the MIO-TCD dataset is available on Kaggle. Thanks.
Hi. Thanks for your comment, but only the owners of the MIO-TCD dataset should share it. We cannot and should not re-share the datasets of others. It is their responsibility to fix their web link.
Wow, it's a great project. Amazing!
Thank you! Cheers!
We are pleased to let you know that the source code is now shared! Please revisit.
Hi, it's really nice work! Can you share the MIO-TCD dataset? Thanks!
It is not our dataset. Just Google it and it is there.
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Will the code for this project be released anytime soon?
We are discussing with Leeds City Council. If they want it as a commercialised product, we may not be able to share it, unfortunately. However, we have already provided a good amount of details in our open access article. Hope that helps.
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Is it publicly available?
Not yet, Hammad.
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Hi, this is amazing. Does this use only a camera to do both object tracking and distance estimation?
Thanks. Yes, it only uses the camera and the location of the ego-vehicle.
Great Job, congrats on the new paper.
Thank you!
We are pleased to let you know that the source code is just shared! Please revisit.
Oh, very interesting for urban traffic engineering.
Thanks for your interest and your comment!
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
This project is fabulous. Hope you win a top conference!
Thank you!
We are pleased to let you know that the source code is just shared! Please revisit. We also published our work in the Elsevier journal Expert Systems with Applications (Impact Factor: ~9.0).
Congratulations! I have been looking for it for a long time!
Hope you like it!
We are pleased to let you know that the source code is just shared! Please revisit.
Hi Mahdi, this is a really awesome demo. May I post this on my social media, with credit to you of course?
Given that you intend to provide credit and a reference, it would be fine.
Hello, I am curious about how to transform a 2D box into a bird's-eye view representation. It seems that you have done 3D object detection as well.
Glad to let you know that the preprint of this research is now available at arxiv.org/abs/2109.09165
Hi, can you please share your published paper regarding this topic?
No publication about this yet. Sorry.
Hi, it is really cool! Can you share your published paper about this, and perhaps the GitHub repo (if any)? Thanks!
Glad to let you know that the preprint of this research is now available at arxiv.org/abs/2109.09165
@@DrMahdiRezaei Thanks a lot!
You are welcome.
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Has your research paper been released yet?
No publication about this research, yet.
This is a really nice project. How did you create this road map for the bird's-eye view? And how do the road boundaries and object bounding boxes match the bird's-eye view so exactly? Can you please explain? Thank you.
Thanks for your interest. This is obtained by fusing the CCTV ground image, a satellite image, and road semantic segmentation. We will publish our preprint soon, hopefully in a couple of weeks from now.
@@DrMahdiRezaei Thanks for your reply. So you take a small portion of your CCTV ground image (4 source points for the perspective transform) and map that portion to the satellite image (where you give the 4 destination points)? Is it like that? Can you please explain this? Thank you.
Glad to let you know that the preprint of this research is now available at arxiv.org/abs/2109.09165
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
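The four-point perspective mapping asked about in this thread can be sketched as follows. This is only a minimal illustration, not the published code: the point coordinates are made-up assumptions, and the small DLT routine reproduces what OpenCV's cv2.getPerspectiveTransform computes from four point pairs.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: fit the 3x3 homography that maps four
    source points to four destination points (the same result that
    cv2.getPerspectiveTransform returns, up to scale)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography vector is the null vector of the 8x9 system:
    # the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

# Hypothetical pixel coordinates of four road points in the CCTV image ...
src_pts = [(420, 300), (860, 300), (1100, 700), (180, 700)]
# ... and where those same points should land in the bird's-eye-view plane.
dst_pts = [(0, 0), (400, 0), (400, 600), (0, 600)]

H = homography_from_points(src_pts, dst_pts)

def to_bev(x, y):
    """Project an image point (e.g. a bounding box's lower edge) into BEV."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

print(to_bev(860, 300))  # approximately (400.0, 0.0)
```

Once the homography is fixed for a static CCTV camera, every detected object's foot point can be pushed through to_bev to place it on the satellite-aligned map.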
This project is fabulous 👍
Glad you liked it!
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Very interested in your work. I am wondering about the progress of your paper.
Working on it with some extra features and developments
@@DrMahdiRezaei Thanks! And when will we see the preprint version of your paper?
Hopefully in a couple of weeks from now. Just performing some further experiments. Thanks for your patience.
Glad to let you know that the preprint of this research is now available at arxiv.org/abs/2109.09165
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Excellent work, sir!
Glad you liked it!
Very interesting. What are the computational requirements for the processing? Also, how domain-specific are your methods, or how easily could they be applied to other domains, such as social video clips?
Thanks. We will publish the pre-print of our work soon, and I would say most of your questions are answered there in detail. You may check my Google Scholar page from time to time for publication updates.
Glad to let you know that the preprint of this research is now available at arxiv.org/abs/2109.09165
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Did you publish this work yet?
Hi Moodi, not yet, but we will publish the pre-print of our work soon, as the journal review process may take ages. Please keep an eye on my Google Scholar page for publication updates.
Glad to let you know that the preprint of this research is now available at arxiv.org/abs/2109.09165
I am glad to let you know that the updated version of our article is published in Elsevier "Expert Systems with Applications", plus the SOURCE CODE! See the description of our last video for more details.
Hello, do you have any papers or documents related to this problem? I really need it. Thank you very much.
Hi, we will publish the pre-print of our work soon, as the journal review process may take ages.
@@DrMahdiRezaei is this pre-print available yet?
Glad to let you know that the preprint of this research is now available at arxiv.org/abs/2109.09165
Can you download the YOLOv4_DeepSOCIAL-1.ipynb file and run it on your own PC?
Not sure what you mean by "your own PC". Are you asking about the computational cost?
Hello Mr. Mahdi Rezaei, nice project. But can it detect in real time from a live feed (without using an input video)?
Depending on the performance of your system, it can run in real time.
very nice
Thanks
amazing
Glad you think so!