One of the cleanest codebases, straightforward logic, excellent explanation. You've earned my respect, a like, and a subscription. Thank you, sensei.
Thank you, I'm flattered!
Your communication of how the model is implemented is excellent, and the clarity is enormous. This is the best YouTube video I have seen for explaining the layers. Of the many I've watched, I can't think of one that helped me understand the layers of a model as clearly as yours does. Thank you. I am a PhD student in Data Science at Arizona State. We are studying U-Net in a graduate course on medical image processing, and I also need it for my dissertation topic.
@@manuelsteele7755 thank you for your kind words :) I'm glad it helped!
Great work. Keep up the quality free training; lots of us are learning from your videos. 👍
Thank you :)
Amazing!!! It's so awesome!! YOU ARE MY GOD!!! 😍
Thank you!
Great to find a source about this unique topic, thanks for your efforts and great teaching 🥰
Thank you :)
Thank you.
Great Work! Simple, step by step!
Do you also plan to implement 3D U-Net in TensorFlow?
Thank you! I don't think I'll get into the variations of U-Net.
Do we need to add .unsqueeze(1) to mask = img_mask[1].float().to(device), and why? Thanks!
@@ElminsterWindwalker unsqueeze adjusts the dimensions of the tensor: it inserts an extra dimension of size 1 at the given position. If the code works without it, we don't need it.
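A minimal sketch of what unsqueeze(1) does here, with hypothetical shapes rather than the video's actual code: it inserts a channel dimension so the mask matches the model's [batch, channels, height, width] output, which losses like BCEWithLogitsLoss expect.

```python
import torch

# Hypothetical: a batch of 4 masks of 512x512 straight from a dataset.
mask = torch.rand(4, 512, 512)   # [batch, height, width]
mask = mask.unsqueeze(1)         # insert a channel dim -> [4, 1, 512, 512]
print(mask.shape)                # torch.Size([4, 1, 512, 512])

# If your DataLoader already yields [batch, 1, H, W], the call is unnecessary.
```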
Hello Uygar hocam, I ran your model, but the predicted mask in the output comes out completely black. I am using the same dataset. Once I can get it working correctly on this dataset, I need to do a project on another dataset. What could be the cause?
Hi, you're most likely skipping a step during data preparation. This usually happens when the image channels get mixed up.
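Another common cause worth checking, sketched below under the assumption of a single-channel sigmoid model (the `logits` tensor is a stand-in for the real output): if raw logits or 0/1 floats are written out directly, the saved mask renders as near-black in most image viewers.

```python
import torch

logits = torch.randn(1, 1, 512, 512)   # stand-in for the model output
probs = torch.sigmoid(logits)           # squash logits into [0, 1]
binary = (probs > 0.5).float()          # threshold to a 0/1 mask

# Scale to 0-255 before saving; a 0/1-valued image looks all black otherwise.
to_save = (binary * 255).byte().squeeze().cpu().numpy()
print(to_save.min(), to_save.max())     # typically 0 255
```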
It was a great video👏👏
Thank you :)
You are great!
You're the man :)
Thank you! Teşekkürler :)
The code reads okay but
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 64, 512, 512] to have 3 channels, but got 64 channels instead
I can't seem to figure out what's going wrong!
That's a shape mismatch: the weight [64, 3, 3, 3] belongs to a layer that expects 3 input channels, but it's being fed a 64-channel feature map, so somewhere a layer is receiving another layer's output instead of the raw image. You have to change the channel variables to match your input data; see the sketch below.
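To make that error message concrete, here is a minimal sketch that reproduces and fixes it (the layer names are hypothetical, not from the video's code): in the weight shape [64, 3, 3, 3], the first number is out_channels and the second is in_channels.

```python
import torch
from torch import nn

conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)   # expects a 3-channel input
x = torch.randn(1, 3, 512, 512)                      # an RGB-like image
h = conv1(x)                                          # h now has 64 channels

# Bug: wiring conv1's output back into conv1 raises exactly this RuntimeError,
# because conv1 still expects 3 input channels.
# conv1(h)  # RuntimeError: ... expected input ... to have 3 channels, got 64

conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)  # fix: accept 64 channels
print(conv2(h).shape)                                 # torch.Size([1, 64, 512, 512])
```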
Thank you very much Uygar! I loved this tutorial, really helpful, carry on!
I was also curious about the amount of memory your GPU has. Could you please share that information?
Thank you! I used Kaggle's P100 GPU. It has 16GB if I'm not wrong.
Hello Uygar, I wanted to ask you something.
I tried to replicate your experiment in Kaggle, using GPU P100 but I am having memory problems, CUDA runs out of memory.
What procedure do you recommend if this happens? (Like setting num_workers=4 or something like this)
Thank you very much! @@uygarkurtai
Is it possible that you're using images with a higher resolution? In that case you need a bigger GPU, or you can try quantization or something similar; see the sketch after this thread. @@FernandoPC25
I think I am using the same dataset as you. I will try the different GPUs in the Kaggle notebook. Thank you very much, sensei!
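Following up on the thread above, a minimal sketch of the two levers that actually reduce GPU memory: a smaller batch/resolution and mixed precision. Note that num_workers only affects CPU-side data-loading speed, not GPU memory. The model and tensors below are stand-ins, not the video's code.

```python
import torch
from torch import nn
from torch.cuda.amp import autocast, GradScaler

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-ins for the real U-Net and batch, just to make the sketch runnable.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1).to(device)
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
scaler = GradScaler(enabled=(device == "cuda"))

img = torch.randn(2, 3, 256, 256, device=device)     # smaller batch + resolution
mask = torch.randint(0, 2, (2, 1, 256, 256), device=device).float()

optimizer.zero_grad()
with autocast(enabled=(device == "cuda")):            # fp16 activations roughly halve memory
    loss = criterion(model(img), mask)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(loss.item())
```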
Nice work. My project entails using a similar model to segment multiple parasitized cells from uninfected cells in a microscope slide image. Any hint would be appreciated.
Hey. There are much more up-to-date models now. If you want to use segmentation in a project, I suggest you check them out.
@@uygarkurtai Thanks for your timely response, it means a lot to me. Would you please recommend videos or material for me?
@@afolabiowoloye804 It's a pleasure. I found this repo. You can check it out. github.com/mrgloom/awesome-semantic-segmentation
@@uygarkurtai Thanks a lot
@@uygarkurtai Such a great repo, thanks!
Hello,
thank you for this video. But I didn't understand where you got the manual_test and manual_test_mask from.
In the data folder we only have "test.zip", "test_hq.zip", "train.zip", "train_hq.zip", "train_mask.zip".
I keep getting an error for the paths of manual_mask and manual_test_mask in the inference part.
Hey, thank you! I got them from the Kaggle competition. I believe I gave the link in the video and showed the competition page too. You can just download them from there.
I ran into the same problem. How did you solve it?
I have an error, "No such file or directory", when I put in the paths of the image and mask.
Can you help me solve this error?
Hey. You probably typed a wrong path to the images and masks. Please double-check; a quick sanity check is sketched below.
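A quick way to verify the paths before training; the directory names below are hypothetical placeholders for whatever your script actually points to.

```python
from pathlib import Path

# Hypothetical paths; substitute the ones your script actually uses.
for p in (Path("./data/manual_test"), Path("./data/manual_test_mask")):
    print(p.resolve(), "->", "exists" if p.exists() else "MISSING")
```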
I am getting this error when training; any idea how to fix it?
Given groups=1, weight of size [512, 1024, 3, 3], expected input[1, 512, 476, 18] to have 1024 channels, but got 512 channels instead
Hey. What's your input image size?
@@uygarkurtai It's a hyperspectral cube of size 349x1905x144 with 15 classes.
@@shoaibshafiahmed1983 you'd have to resize your image or modify the model parameters accordingly in that case; there's a sketch of the needed changes after this thread.
@@uygarkurtai I am working on a school project with hyperspectral image data and need to perform semantic segmentation using U-Net with PyTorch. Are you open to working on it privately, or to helping me solve the errors I am receiving?
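For the 349x1905x144 cube above, a hedged sketch of the two edits a U-Net-style model would need (the layers here are stand-ins, not the video's architecture): the first convolution must accept 144 input bands, the final one must emit 15 class logits, and the spatial dimensions should be padded or cropped to a multiple of 2^depth so the encoder-decoder skip connections line up.

```python
import torch
from torch import nn

in_bands, n_classes = 144, 15

first_conv = nn.Conv2d(in_bands, 64, kernel_size=3, padding=1)
final_conv = nn.Conv2d(64, n_classes, kernel_size=1)

# A small tile for illustration; 349x1905 would be padded/cropped to
# multiples of 16 (e.g., 352x1904) for a four-pooling U-Net.
x = torch.randn(1, in_bands, 64, 64)
logits = final_conv(first_conv(x))
print(logits.shape)   # torch.Size([1, 15, 64, 64])
```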
Hi, great video! It already helped me a lot! Thanks!
I'll try to train a U-Net with my own data. Can you tell me if there is anything else I need to be aware of? Does it matter what kind of data files I use (jpg, png, gif, or tiff), or do they have to be jpg files plus gif files with labels for training?
Great to hear that! You can choose any format as long as you can load it.
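A small self-contained demo of that point, with throwaway files written just for the example: the container format (png/tiff/jpg) only changes how the file is decoded; after Image.open(...).convert(...), the resulting array looks the same.

```python
from PIL import Image
import numpy as np

# Write the same tiny image in three formats, then read them back.
for ext in ("png", "tiff", "jpg"):
    Image.new("RGB", (64, 64), color=(10, 20, 30)).save(f"demo.{ext}")

for ext in ("png", "tiff", "jpg"):
    arr = np.asarray(Image.open(f"demo.{ext}").convert("RGB"))
    print(ext, arr.shape, arr.dtype)   # all (64, 64, 3) uint8
```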
I like this video: so clear, and I was able to follow along and do the same thing. I understood U-Net thanks to you. Thank you so much. Could you do a video on DDPM, especially conditional or unconditional DDPM using a U-Net? Thanks a million.
@@Samuel-san-x9x thank you! I already have a DDPM video. Check it out here: ua-cam.com/video/LGe0xhRseeg/v-deo.htmlsi=QSAnwVGYrL5Vdafz
@@uygarkurtai thank you.
thanks so much
Thank you!
Great video thanks
Thank you :)
Amazing! Thanks for sharing your knowledge and skills!
Thank you :)
Thanks
thank you :)
I've trained your model for 30 epochs, but when I use the model to predict, the results are really bad! Then I trained for 10 more epochs and it's still bad. I don't know why; I used your dataset and your scripts.
Me too!
@@ElminsterWindwalker I've fixed it; his code is perfect. It was my fault.
@@TinLee99 that usually happens when you use a different dataset with different image channels. In that case you have to make slight modifications to your code. Are you using a different dataset?
@@uygarkurtai I used the pet dataset and trained for 30 epochs each, and I can see results. But when I changed the upsampling from ConvTranspose2d to PyTorch's Upsample, my results got better; I don't know why.
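A minimal sketch of the two decoder choices being compared, with hypothetical channel sizes: a learned ConvTranspose2d versus nn.Upsample followed by a regular convolution. Transposed convolutions are known to produce checkerboard artifacts in some setups, which may explain the difference in mask quality.

```python
import torch
from torch import nn

x = torch.randn(1, 128, 32, 32)   # a stand-in decoder feature map

transposed = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
upsample_conv = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(128, 64, kernel_size=3, padding=1),
)

# Both double the spatial resolution and halve the channel count.
print(transposed(x).shape)      # torch.Size([1, 64, 64, 64])
print(upsample_conv(x).shape)   # torch.Size([1, 64, 64, 64])
```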
Congratulations, and thank you for sharing. A very clear and supportive post. I'm liking and subscribing right away. :)
Thank you very much :)