Sign posts and traffic lights are probably standard heights, so it would be simpler to train a neural net to measure water depth using them for reference
DAMN, that was a genius idea to just build a simulation holy heck! Idk how long it would've taken me to think of that and my perspective on solving this kind of problem is forever changed!
Phones have gyroscopes. You can use height and tilt to triangulate this. You could even implement this into a car to do it without user error / user's variability.
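One rough way that gyroscope idea could look as a sketch (the function name and the known camera height are assumptions; a real phone app would need calibration):

```python
import math

def horizontal_distance_ft(camera_height_ft, tilt_from_vertical_deg):
    # With the camera height above the road known (or guessed) and the
    # phone's gyroscope reporting how far the camera is tilted from
    # straight down, the horizontal distance to the point the camera
    # centers on is h * tan(tilt).
    return camera_height_ft * math.tan(math.radians(tilt_from_vertical_deg))
```

At a 45-degree tilt the distance equals the camera height, which is a quick sanity check for the formula.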
the music at 7:45 reminds me of the music in the one spongebob episode where he learns about the concept of 'people order our patties'. I love that music
Wow, This video was so Amazing! You show us a lit of fascinating techniques😁 Great solution to a real life problem You've got a new subscriber 😄😄 Thank you so much!
1) you don't need to put all the images in the gpu at once to train a neural net 2) you could have put them all in the gpu anyway; a jpeg can be just a few kb (thanks to compression), so you could have put the raw jpegs in gpu memory, then have a gpu kernel decode them to raw pixel values and feed those to the neural network. Then again, I write my own gpu kernels and don't really use neural net libs, so I'm not sure what their limitations are. Other than that, great work ! :^)
I don't think I've ever seen someone sponsored by IBM. Big flex.
Just him and Linus Tech Tips, from what I've seen.
@Robo Cop It's not a joke...
@Robo Cop it's legit, you idiot.. he'd be fucked up if he made a joke like that
Because IBM doesn't really sell consumer products
My school is
Shoutout to Nick, the realest dude!
hey dani
yh
heyy dani!
oh hi
I guess I found you in a different comments section
Sup dani bruh bruh
I think your mute acting and voice over gives your channel a nice touch. Fantastic video, thanks Jabrils !
i think it's pantomiming
He just eats too much candy. Bad example for the kids.
Ryan Paaz which kids are watching this dumbass
The LoopDigger it deffff does!!!!
That's basically why I subbed
Simulating to produce data..... that's genius!
Hey Jabrils just wanted to say that your C# and Python series is awesome.
Thank you very much ^^
Where is the C# and Python series?
@Ich Werde Marvin Genannt Hello Marvin
@@timetotravel9147 ua-cam.com/video/6cYI3MSkxp8/v-deo.html
This is your most creative project yet. I love the simulation solution!
Dude, please PLEASE work with Mark rober again. The video was cool and I would love to see more
BRUH just stop making me want to eat candy!
Candy? Do you mean sweets
Minecraft Console Modder dumb shit
@@cheesydip270 Stop swearing in my minecraft christian server
@@animationspace8550 OH FUCK-umm I me-mean.. frick? uhhmm
@@cheesydip270 Ay man FUCK that guy! 😂 U do u fam!
Damn, the simulation solution was amazing, good luck man
Jabrils and Iron Man are the only reasons I still care about learning programming and tech science. God bless you, Jab-s.
Maybe geo-location would add an extra layer of accuracy?
Extract GPS metadata in photos from over the web to gather information of non-flooded images compared to flooded images taken by the user at the 'scene'. Extracting the geo-location and mapping the image features of same gps co-ordinates and then to overlay images of the same location to get more accuracy.
I could probably spend more time explain that better! But I think you get it!
Imagine the cringiness when Jabrils does these face reactions and covers for his voice in complete silence
I hate it so much
ToyBatMations ok
Maybe he records the voiceover first then films using the audio
Amazing video my guy, for someone that knows programming basics, this was very interesting and informative
Really nice theoretical explanation and approach to how a problem has to be solved. I graduated last year in Software Eng and man, you did way better than most of my teachers
Great video! Super interesting yet entertaining (as usual)! Keep it up King Jabrils!
Just imagine, Jabrils chilling in his living-room for about 10 mins with his voice on repeat, flipping gestures at a camera... Beautiful
IBM slidin in them DMs
Really cool project!
The KSP and the wii music are too good. Great content, man!
Jabrils: trains ai to sea the water height
My brain: Mind blow
I'd definitely contribute data if this was crowdsourced. :)
This guy is underrated he deserves more subs
This is cool, so here's some advice that anyone is free to use and follow, too:
I think there is still some useful information about the scene, encoded in the color information, but not enough for high precision. So I think the training images should be converted into a color space with one luma channel, and two chroma channels. The catch is: The chroma channels can be reduced to a bit depth of 3 or 4, and reduced in resolution even more, by dividing the x and y resolutions by 2, to get ¼ the number of pixels. It doesn't need to stop there, either; you could probably reduce the luma channel to 6 or 7 bits per pixel, instead of the standard 8, to get just a little more space savings. Just remember that with each bit you take off the bit depth, you're dividing the color precision by 2; or in other words, 8 bits can store 256 values of color, but 7 bits will only store 128 values of color. If you decrease the bit depth of the luma channel, it may pay off to increase the overall resolution of the training image just a little, because there may be some good information in the finer details of the shapes in the image. Then again, there may not be; so it might be good to experiment with both options.
I hope someone finds this useful!
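A minimal numpy sketch of the color-space trick described above, assuming full-range BT.601 conversion coefficients (all function names here are illustrative):

```python
import numpy as np

def rgb_to_ycbcr(img):
    # ITU-R BT.601 full-range conversion; img is a float array in [0, 255],
    # shape (H, W, 3). Produces one luma channel (Y) and two chroma (Cb, Cr).
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample(channel):
    # Halve the x and y resolution to keep 1/4 of the pixels (4:2:0-style).
    return channel[::2, ::2]

def quantize(channel, bits):
    # Reduce bit depth: each bit removed halves the number of levels
    # (8 bits -> 256 values, 7 bits -> 128, and so on).
    levels = 2 ** bits
    return np.floor(channel / 256.0 * levels).clip(0, levels - 1)
```

The chroma channels would get both `subsample` and a low-bit `quantize`, while luma keeps full resolution at 6 to 8 bits, as the comment suggests.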
That simulated data set got me. I thought you were actually going to have to spend 16 years collecting flood data (or call off the project). Well played, J
**turns 12 am**
**enters office**
App: Threat level, Midnight
That's the high quality content we need. Nice video! :)
The only channel where I watch the adverts in full!!
I love you
@@Jabrils Thanks wish I had discovered your channel earlier. Keep up the good works
Hey Jabrils, I just discovered you via a Mark Rober video (stealing signs!). Mannn, I am impressed by your work. Granted, it is light years beyond my comprehension but I still love the work and how you present it. My guess is that you are (and will continue to be) an inspiration for younger kids to go into the sciences and maths. I wish I had enough time left on this earth (I'm 65) to be able to understand everything you talk about but I just wanted to say I will continue to watch your videos (and Mark's) in hopes that SOME of it wears off on this old guy.
Cheers!
Jabrils, another great video. You put a lot of work into this. I hope to get as good as you in ML soon. This definitely gave me some inspiration to keep going!
Jabrils, your videos are so entertaining and you explain everything to the viewer really well! Any time I was confused about a certain term, you explained it in the video! You make watching and learning through these videos so much more fun and interactive! Keep doing what you're doing!
I think you should first build a model to predict if there is water in the picture.
This model can be used both to preprocess the training data and as a first layer in the app.
It can give a score for how confident it is that water is in the picture and function as a confidence value for the final prediction (i.e. if the confidence level is low, the app should ask the user for another photo instead of giving a bad prediction)
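A sketch of that two-stage gating; the 0.8 threshold and all the names here are placeholder assumptions, not anything from the video:

```python
def assess_photo(water_confidence, depth_estimate_ft, threshold=0.8):
    # water_confidence: score in [0, 1] from a hypothetical first-stage
    # "is there water?" model; depth_estimate_ft: output of the main
    # depth model. If the detector isn't confident, refuse to answer
    # rather than return a bad prediction.
    if water_confidence < threshold:
        return {"ok": False,
                "message": "Not sure there is water here - please take another photo"}
    return {"ok": True, "depth_ft": depth_estimate_ft}
```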
THANK YOU for addressing the importance of accurate labelled data. It's a thing that I almost never see discussed in ML projects and that's something I'm VERY concerned about.
TFW you're watching this video while waiting for your own ANN to finish training. :-| Great video Jabrils!
hey so, basically, in keras, if you have a large dataset, you should train using a generator. Keras's fit_generator takes a generator as an argument that should return the data of ONE image at a time
also, make sure to include examples of unflooded images, and surfaces that resemble water but aren't, like glass
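A minimal sketch of the shape such a generator takes; `load_example` is a stand-in for whatever actually reads one image and its label from disk:

```python
import numpy as np

def training_generator(image_paths, load_example, batch_size=1):
    # Loops forever, yielding one (inputs, labels) batch at a time, which
    # is what Keras's generator-based training expects; only batch_size
    # images are ever held in memory at once.
    while True:
        for start in range(0, len(image_paths), batch_size):
            batch = [load_example(p) for p in image_paths[start:start + batch_size]]
            xs = np.stack([x for x, _ in batch])
            ys = np.array([y for _, y in batch])
            yield xs, ys
```

(Newer Keras versions deprecate `fit_generator` and accept generators directly in `model.fit`, but the generator itself looks the same.)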
the simulation solution was amazing JABRILS!!! Many people who gather a lot of images for training might learn something from you one day!!
I stay in India and there's a lot of flooding going on, I am so glad that you posted a solution and I can use your brilliant idea and help the people of my country, a big thank you! Loved your video, always been a fan of your work.
This is a really cool project! As a CS student, I really appreciate the idea of a Progress Ladder. I'll definitely try to add this as a problem solving tool
The worst thing about this video is that it has an end...
Gz Jabrils!
When you said that you've gotten help from IBM I was astounded. It really just takes the readiness to try things like that to actually do them! I'll try my luck with my own ideas, thanks for that great video!
that music from GoldenEye n64 man... :D haha Excellent. Btw AWESOME videos!
You are one of the smartest people I have seen.....
Convolutional neural nets are what I'm primarily interested in getting into, so appreciated this vid. This channel in general is so informative and helpful.
Keep them coming bro, we appreciate your work
Love the content, love the concept, love the execution. Awesome work as always.
Before I get to it, I've gotta say I really like your videos- I've learned a lot watching you, and dig your style. I'm using your lessons to learn neural networks, and looking to implement them in my field, which is automated machine controls.
But here's a way to save a LOT of time on this one- make the app just put two buttons on the screen, and a question- "Is there water over the road?" The buttons will say "Yes" and "No". If the user clicks "Yes", change the screen to "Turn around, do not drive through". There is NEVER a safe time to drive through water over a road. If the water is moving, it's nigh insane to drive through it. Roads are not designed to be submerged, and even before the water makes it to the top of the road, it's starting to erode the foundation of the road. The water could be an inch deep over the normal top of the road, yet there be a 20 foot deep hole where the road used to be.
As a developer, I have to ask myself something important when pushing out a program that could conceivably put someone's life in danger if my program gets it wrong (and in industrial controls, that's a lot of the time). What's my limit of liability here? In this case, if you roll out something where someone takes a photo of water over a road, and then drives through based on your program's answer and the road is washed out under the water and they die, you need to consider where the family is going to turn for recompense. In short, as someone who deals with liability concerns in programming, and also lives in an area where it's not super-rare to see water over roads (and washed out roads), I'd say this one is NOT worth the effort, to the extent that it's worth avoiding at all costs.
Hey, great video. Just a heads up - at 24:22 you mentioned that the green section above the confusion matrix's diagonal line represents the model's overestimation of water depth and the red section below the diagonal is an underestimation, but I'm pretty sure it's the other way around (assuming that the matrix rows correspond to actual values and columns represent the model's predicted values).
This is actually true
@@smg950u thanks for showing me this sub lol
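Assuming that convention (rows = actual values, columns = predicted values), the two regions of the matrix can be tallied like this:

```python
import numpy as np

def over_under_counts(cm):
    # Rows are actual depth classes, columns are predicted depth classes.
    # Entries above the diagonal (column index > row index) are predictions
    # HIGHER than the true value, i.e. overestimates; entries below the
    # diagonal are underestimates.
    overestimates = int(np.triu(cm, k=1).sum())
    underestimates = int(np.tril(cm, k=-1).sum())
    return overestimates, underestimates
```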
Here’s a good idea for this project. Download all the flood images and create an autoencoder decoder cnn. Then encode all the flood images and run it through a unsupervised learning model
Note: I haven't finished the video yet so I don't know if this is a solution you considered
Why not have two light emitters on the wearable? One that can penetrate water, IE a weak blue laser of some sort, and one that doesn't penetrate water, like infrared. Then, you can determine the depth of the water by subtracting the distance between the reflection points. Also, using something similar to a radar gun, you could determine the speed the water is moving, and, if a small compass is on board, the direction it's moving relative to the user.
Idk maybe it's a bit too hardware-oriented for a programming competition, but if you do decide to revisit this idea in the future, this would totally work and not be neural-network reliant.
Dirty water would probably make that harder
Xanny ultraviolet, X-rays, and gamma rays are still options, and gamma and x-rays wouldn't last for more than a few billionths of a second, so they wouldn't hurt anyone
Mason Fuller takes a lot of energy tho.
Maybe it needs to be affordable, other than for rescue services who can afford proper hardware.
Hello fellow factorian, factorio got me to think like a coder. haha
really nice example about precision vs recall !!
i want to create an app that collects emotional input from everyone around the world translated into user status, that generates statistics that will help AI predict where chaos is likely to take place based on the Geographical Energy-Motion states collected.
Everything happens for a reason, except how we choose to feel about it.
but unfortunately i am no coder, i just love it.
Hey @Jabrils you are the best man. This video motivated me to use software to solve world problems
Nice idea! My first thought when you talked about the problem was to strap a motion sensor to an arduino with a battery and put it in a waterproof container. When the ball is thrown, it sinks to the bottom of the stream, and you time how long it takes to hit the bottom. You now have your depth. Then, it collects the data from the motion sensor, does some quick math to determine the speed of the current, then sends it all to the user's phone as a text. Good idea using machine learning, but I think you might have gotten more accurate results with a physical data collection tool. Keep up the great videos! :)
The confusion matrix part is the best resource for learning it!!! mind blown! Keep up the work! 💞💓
You've probs already got this comment before but in keras you can use a data generator to load the images from storage rather than RAM, allowing you to train on larger datasets
Being sponsored by IBM is the biggest flex I have ever seen
the intro to the video gave me the idea of producing a drone that spots body heat and identifies body parts with AI; or, if there's no body heat, just tries to identify human parts among the rubble from different types of accidents or catastrophes
Yo, this is a really great idea!
@@theredefinedprogrammer2073 wanted this idea to reach him, so far no good.
Leopoldo Ferreira de Paula hey man there is always an IBM Drone Drop 2020! Why don’t you and I look into it?!
Yeah man if you want. Up to you. Im very interested in this
This is a great idea!
This is an awesome idea. I really hope this goes into production to test it out. I’m not too good at coding (I’m learning from you) so I will be standing by to see how this goes. Buutttt, all the best of luck and I can’t wait to see this. 👏🏼
you don't need any training data... just point Safe Waters at a stop sign; it can use the sign as a measuring stick to figure out how much is left above the water and subtract that from the normal stop sign height
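As arithmetic this is a single subtraction; the 7 ft mounting height below is an assumption (it varies by jurisdiction, so a real app would need a calibrated per-region value):

```python
def depth_from_stop_sign(visible_height_ft, full_height_ft=7.0):
    # Estimate water depth as the assumed full height of the sign above
    # the road minus how much of it is still visible above the water.
    if visible_height_ft > full_height_ft:
        raise ValueError("visible height exceeds the assumed sign height")
    return full_height_ft - visible_height_ft
```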
... 'miming with voice-over' is so entertaining and very addictive.
Feeling nostalgic about Gran Turismo 3 now. (The music in the beginning is from GT3)
Scrolled down for this
This is much simpler than you make it. Use GPS and topo data, combined with route planning. Tap a button at the edge of the water, where you have to stop your vehicle, and it could use topo data and your route plan to determine depth, and plan a new route or do a risk assessment. This is useful for first responders (firemen, national guard etc.) in big trucks to reach stranded people.
Please go on with this project. The simulation was a great idea.
Amazing video again, thank you, Jabrils! And I loved “percision” :)
this project is crazy. it seems as if EVERY element of it is something you didn't know how to do at all
You should have a million followers. Congrats from Brazil.
that's some serious black mirror material content
keep on the good work jabrils
5:10 what is that lit-ass music you are using
The content is awesome the comment section is even better thank you dude for creating great content
Thank you SO much for these videos and your Python and C# course it's been helping me greatly trying to learn this stuff. Being able to see step by step is so helpful! You're the best!
Thank you for breaking down the logistics!
The small visuals help out a lot too! I’m learning a lot from just watching two of your videos 👍
Jabrils, you’ve participated in a couple game jams on your channel but what a lot of us really want is for you to host your own game jam.
It won't be much effort; all you need is: a hype video announcing the date and rules, an itch.io page where people can upload games, a video announcing the theme on the day the jam starts, and a video showing the best games you played.
Edit: looking over it now it could be a bit of effort
+1 for shower thoughts.
I do this too!
You could also (as a side-project?) check if water is deep enough for the user to jump in head first. With your current version it shouldn't be a big problem to implement, and an app-based version would save a lot of lives this way too.
At 11:30, you didn't take the refractive index of water into account. The ground appears at 3/4 of its real depth due to refraction. The images you used for training had depth-in-picture = depth-in-reality, which might lead to some errors in the application.
I understand that the machine is being trained to tell the depth by analysing the surroundings, but the change in apparent depth still affects the measurement
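The correction itself is one multiplication, assuming a near-vertical viewing angle (where apparent depth is roughly true depth divided by the refractive index):

```python
def correct_for_refraction(apparent_depth, n_water=1.33):
    # Viewed from (nearly) straight above, apparent depth = true depth / n,
    # so the bottom looks like it sits at about 3/4 of its real depth
    # (1 / 1.33 is about 0.75). Multiplying by n recovers the true depth.
    return apparent_depth * n_water
```

At oblique angles the geometry gets messier, so this is only a first-order fix.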
I love that there's Goldeneye 64 music in parts of this video!
Hey, will you please make some videos about "how did you learn 3d simulation programs", or in general "how did you learn 3d programs", or "what was your first 3d project"? .. the idea of gathering data from simulation was brilliant; it only comes up when you are an expert in 3d and have a lot of experience with it .. anyway thanks for your channel, it took me a day to watch all your videos! The editing is awesome as well
That was a very inspiring project for sure. The only question is how well it performs in the real world. It used simulation data for both training and testing, because the labeling problem with real-world data you talked about hasn't gone away; if you were to test it in the real world, you'd need a bunch of labeled real-world data to test against. Had you considered looking for real-world images with GPS tags, then looking up the altitudes for those GPS locations? Similar to your ray-casting solution, you could look up the altitude at the image's GPS tag, then go out 5 ft, 10 ft, 25 ft, etc., measure those altitudes, subtract them, and get the depth the water would be at each distance from the viewer. That alone would give you a good idea of how deep the water could get from where the person is standing (assuming the person was in relative safety when they took the picture and was reasonably close to the water). To get to fully labeled data you'd need an estimate of where the viewer stands relative to the edge of the water, which means you'd have to understand where the water is in the picture (more neural networks); then, using some field-of-view math for the camera, you could figure out how far the water is from the viewer. Not simple, but doable. I thought it was a great video.
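A minimal sketch of the altitude-difference idea above. `lookup_elevation` is a hypothetical stand-in for a real elevation service (e.g. a DEM query); the coordinates and elevations here are made-up illustration data:

```python
# Idea: terrain elevation at the photo's GPS tag, minus elevation at
# points stepping toward the water, gives an upper bound on how deep
# the water could be at each distance from the viewer.

def lookup_elevation(lat: float, lon: float) -> float:
    """Hypothetical elevation lookup; returns meters above sea level."""
    fake_terrain = {
        (40.0, -75.0): 12.0,      # viewer's position
        (40.0001, -75.0): 11.2,   # a few meters toward the water
        (40.0003, -75.0): 10.1,
        (40.0006, -75.0): 9.5,
    }
    return fake_terrain[(lat, lon)]

def depth_estimates(viewer, samples):
    """Elevation drop from the viewer's position to each sample point."""
    base = lookup_elevation(*viewer)
    return [base - lookup_elevation(*p) for p in samples]
```

In practice you'd replace the dictionary with calls to an actual elevation dataset or API.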
Thanks for this 😀 there are a few things that I'd kill for on the Mac version (like Twitch Stats and simple connectivity), but the one that's been staring me in the face all along is the hide audio source!
Great job and very entertaining to watch. Looking forward to viewing more of your videos!
Man I think the key here is substantiating where the water *isn't*.
The ML predictions here were implicitly dependent on terrain topology, but the initial focus was the water surface, which was effectively reduced to a plane.
I'd say the bulk of the training data gravitated toward risk prediction from vertical features such as poles and buildings, with feature density disrupting the 'plane' of water.
What conclusions could we draw from grouping the input data, i.e. comparing an image batch from the urban section of your simulation against a group of peripheral locations?
Integrating this project with geospatial / GPS / elevation datasets looks viable.
The terrain simulation approach was a great idea. Your video and explanations were awesome!!!
Nice seaborn heatmaps ;)
Best regards
Absolutely amazing. Very well-made video.
Sign posts and traffic lights are probably standard heights, so it might be simpler to train a neural net to measure water depth using them as a reference.
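A tiny sketch of that reference-object idea. The heights below are illustrative assumptions, not verified standards; the point is only the subtraction:

```python
# If an object's full height is known, the water depth at its base is
# the known height minus how much of it is still visible above water.
# These heights are made-up placeholders for whatever the local
# standards actually are.

KNOWN_HEIGHTS_M = {"stop_sign": 2.1, "traffic_light": 5.5}

def depth_from_reference(obj: str, visible_height_m: float) -> float:
    """Depth implied by a partially submerged reference object."""
    total = KNOWN_HEIGHTS_M[obj]
    return max(0.0, total - visible_height_m)
```

The hard part a neural net would still have to do is recognize the object and estimate its visible height from pixels.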
DAMN, that was a genius idea to just build a simulation holy heck! Idk how long it would've taken me to think of that and my perspective on solving this kind of problem is forever changed!
Big thumbs up for that nostalgic Gran Turismo soundtrack
Phones have gyroscopes. You could use camera height and tilt to triangulate this. You could even implement it in a car to do it without user error / variability.
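The height-and-tilt geometry in this comment reduces to one line of trigonometry. A sketch, assuming a flat ground plane and a camera height you'd have to guess or ask for (1.5 m eye level is an assumption here):

```python
import math

# If the phone's sensors report the tilt of the camera's center ray
# below the horizon, and you assume the camera's height above the
# ground, the ground distance to where that ray lands is:
#     distance = height / tan(tilt)

def ground_distance(camera_height_m: float,
                    tilt_below_horizon_deg: float) -> float:
    """Flat-ground distance to the point the camera's center ray hits."""
    return camera_height_m / math.tan(math.radians(tilt_below_horizon_deg))
```

At a 45° tilt the distance simply equals the camera height, which is a handy sanity check.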
Best ML tutorial on YouTube, kudos
No one has mentioned the 007 Goldeneye music in the background....LOVED IT haha
My guy this project actually seems pretty cool. I would like to see you continue it. Also maybe try testing it on real life images
The music at 7:45 reminds me of the music in the SpongeBob episode where he learns about the concept of 'people order our patties'. I love that music.
Loving the video so far!
Everyone is getting sponsored by audible and honey and my boy Jabril over here getting sponsored by fucking IBM.
1:18 I haven't heard that sound in years.
Wow, this video was so amazing! You showed us a lot of fascinating techniques 😁
Great solution to a real life problem
You've got a new subscriber 😄😄
Thank you so much!
13:38 I think you meant to say 2,073,600 pixels here. 6,220,800 is the amount of color values, 3 for each pixel: Red, green and blue.
Brethorn I know, but calling each color value a pixel is incorrect.
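The arithmetic behind this correction, spelled out (assuming a 1920×1080 frame, which matches the numbers quoted):

```python
# A 1080p frame has width * height pixels, and each pixel carries
# three color values (R, G, B).
width, height = 1920, 1080
pixels = width * height       # 2,073,600 pixels
color_values = pixels * 3     # 6,220,800 individual color values
```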
I love your videos. Bond 64 music brought a nostalgic tear to my eye :)
The best coding oriented channel by far!
I really can't stop imagining him gesturing to the camera without sound, thinking of what he is going to say, but not actually saying it
..came for the candy, stayed for the science
Thank you for the explanation about the confusion matrix graph! Really awesome.
1) You don't need to put all the images in the GPU at once to train a neural net.
2) You could have put them all in the GPU anyway: JPEGs can be just a few KB thanks to compression, so you could store the raw JPEG data in GPU memory, then have a GPU kernel decode it to raw pixel values and feed those to the neural network.
That said, I write my own GPU kernels and don't really use neural-net libraries, so I'm not sure what their limitations are.
Other than that, great work ! :^)
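Point 1 in the comment above is what every training framework's data loader does under the hood. A framework-free sketch, with `load_and_decode` standing in for whatever image decoder you'd actually use (PIL, OpenCV, etc.):

```python
# Instead of loading every training image into memory at once, yield
# small batches and let the framework move each batch to the GPU.

def batches(paths, batch_size, load_and_decode):
    """Yield lists of decoded images, batch_size items at a time."""
    for i in range(0, len(paths), batch_size):
        yield [load_and_decode(p) for p in paths[i:i + batch_size]]
```

Memory use is then bounded by one batch, not the whole dataset, which is why the "can't fit all images in the GPU" problem usually doesn't arise in practice.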
Dude!!! You should be the most subscribed channel on YouTube.