210 - Multiclass U-Net using VGG, ResNet, and Inception as backbones

  • Published Jan 24, 2025

COMMENTS • 159

  • @mohamadkhosravi7408
    @mohamadkhosravi7408 1 year ago +3

    One of the most efficient videos I have ever watched. THANKS

  • @ottobena
    @ottobena 3 years ago +4

    Excellent tutorial. Simple and precise. The way you go back to the basics in between is why I like your videos more than some of the other popular channels.

  • @ananyabhattacharjee4217
    @ananyabhattacharjee4217 3 years ago +1

    I regret not finding this channel before. After 4 months, I am looking into this video, which didn't allow me to skip even a single second of your content. Thank you, sir.
    Can you make your next video on attention networks for medical imaging? Please, it will definitely help many of your subscribers.

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      That is the plan. Please watch my video number 225 on attention Unet. Also, stay tuned for video 226 which will be released this Wednesday (July 14th, 2021).

    • @ananyabhattacharjee4217
      @ananyabhattacharjee4217 3 years ago

      @@DigitalSreeni Yes sir, I have watched it. Thank you.

  • @rafamichalczyk6500
    @rafamichalczyk6500 8 months ago

    Thank you.

  • @deepblender
    @deepblender 3 years ago

    I just found your channel, and after watching a few videos I came to your most recent one just to say thank you! This is a great and highly underrated channel!

  • @aggreym.muhebwa7077
    @aggreym.muhebwa7077 3 years ago +2

    Thank you for the amazing explanations! I am really grateful for the way you are able to explain complex concepts in an easy way!

  • @konstantin6482
    @konstantin6482 3 years ago +6

    Awesome stuff man!
    Really like how you go over the smallest details. I'm a somewhat experienced coder and I just set the speed a little higher)
    Waiting for a video about UNet++ !!!
    P.S. You can just use sparse categorical cross-entropy, and then there is no need for the one-hot encoding
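The P.S. above can be illustrated with plain NumPy (a minimal sketch; `sparse_categorical_crossentropy` and `categorical_crossentropy` are the Keras loss names, everything else here is illustrative):

```python
import numpy as np

# Integer label mask, e.g. the output of LabelEncoder: shape (H, W)
labels = np.array([[0, 2],
                   [1, 2]])
n_classes = 3

# One-hot encoding, shape (H, W, C): needed only for categorical_crossentropy
one_hot = np.eye(n_classes)[labels]

# argmax recovers the original integer mask, which is all that
# sparse_categorical_crossentropy expects as ground truth
recovered = one_hot.argmax(axis=-1)
print(one_hot.shape)  # (2, 2, 3)
```

With the sparse loss, the integer mask is used directly and the `np.eye` step is simply skipped.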

  • @caiyu538
    @caiyu538 3 years ago

    Great series, I keep on learning from your tutorials. Thumbs up.

  • @JoseOliveira-lz1up
    @JoseOliveira-lz1up 2 years ago

    Your content is always awesome, keep going with your stuff!

  • @shashankkaryakarte8463
    @shashankkaryakarte8463 2 years ago

    Sreeni, you are a special person sent by God to save many like me..... Thanks a lot.....

  • @fuegopuro5933
    @fuegopuro5933 3 years ago +1

    You are a game changer, sir!

  • @yusakaya9950
    @yusakaya9950 1 year ago

    Thanks!

  • @ZhiGangMei
    @ZhiGangMei 3 years ago +3

    What versions of TensorFlow and Keras did you use? I have trouble running the example provided by segmentation_models on its GitHub using TensorFlow 2.4.

    • @sam_d-z4g
      @sam_d-z4g 3 years ago

      Me too. Did you fix the problem?

  • @salarghaffarian4914
    @salarghaffarian4914 3 years ago

    Your videos are very useful. So many thanks for your efforts.

  • @temiwale88
    @temiwale88 1 year ago

    I love finding gems! You are one, and thank you!

  • @jacobusstrydom7017
    @jacobusstrydom7017 3 years ago +1

    Great job as always!! You really have a gift for explaining complex topics and make it understandable.

  • @tilkesh
    @tilkesh 21 days ago

    I am not sure about ChatGPT, but I am sure that, apart from us, Gemini has also learned from you by watching your videos and taking code from GitHub. So you also have a virtual AI student.

  • @mimolinodeviento
    @mimolinodeviento 3 years ago +11

    As helpful as always!! I will be benefiting from many of your videos to develop my master's final project; how should I acknowledge (or cite) your inspiring hard work?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +4

      Thanks for inquiring about acknowledgement. You can just mention my name (Dr. Sreenivas Bhattiprolu) and my YouTube channel name: DigitalSreeni

    • @mimolinodeviento
      @mimolinodeviento 3 years ago

      @@DigitalSreeni thank you! I will definitely do that :)

    • @jaddyroot
      @jaddyroot 3 years ago

      @@DigitalSreeni Hello Sreeni. Thanks a lot for your materials; they are all very helpful for me. Now I'm trying to adapt your code for my personal Python microscopy-image project, but I've hit a problem at the model compilation stage: I can't find the correct Keras version to make the segmentation_models library work. I use Google Colab, but the preinstalled Keras version is not good for segmentation_models, so I tried installing specific ones, but as I said I still get compilation problems. Could you please tell me which Keras version you used in this video on your local machine? Thank you.

  • @vladimirserg7567
    @vladimirserg7567 3 years ago

    Thank you very much! This is very useful content, you are doing a good job

  • @MohamedAbomokh
    @MohamedAbomokh 3 years ago

    Thank you sir for explaining :)
    really needed this

  • @pureeight7003
    @pureeight7003 3 years ago +1

    You made me laugh at 32:20: "changing gray level to color ... assuming most of you are humans" .... hahahahahaha. I have subscribed to your channel already.

  • @surflaweb
    @surflaweb 3 years ago

    Really helpful library. Thanks

  • @fuegopuro5933
    @fuegopuro5933 3 years ago +1

    Wow! Can you do LiTS or BraTS?

  • @felipecordeiro8531
    @felipecordeiro8531 11 months ago

    Did you split the one big image into smaller 128x128 pieces? Is that an alternative way to avoid the high computational cost on the GPU? Instead of training on the original image, you use "patches" of the original image, is that it?
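A NumPy-only sketch of this tiling idea for non-overlapping patches (assuming the image dimensions divide evenly by the patch size; the patchify library offers the same with overlap support):

```python
import numpy as np

def to_patches(image, patch=128):
    """Split a 2-D image into non-overlapping patch x patch tiles.
    Assumes both dimensions are exact multiples of the patch size."""
    h, w = image.shape
    tiles = image.reshape(h // patch, patch, w // patch, patch)
    # Bring the two grid axes together, then flatten them into one batch axis
    return tiles.swapaxes(1, 2).reshape(-1, patch, patch)

big = np.zeros((512, 768), dtype=np.uint8)  # stand-in for the large training image
patches = to_patches(big)
print(patches.shape)  # (24, 128, 128)
```

Each 128x128 patch then becomes one training sample, so GPU memory only ever holds small inputs.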

  • @vimalshrivastava6586
    @vimalshrivastava6586 3 years ago

    Superb explanation 👌👌

  • @eli_m6556
    @eli_m6556 3 years ago

    Great video as always

  • @unamattina6023
    @unamattina6023 2 years ago

    The dataset given in the description is not the same as the one in this video; what should I do?

  • @mahshidbenchari350
    @mahshidbenchari350 1 year ago

    Thank you for the video. Is it possible to let me know how we can download the dataset?

  • @amnesie148
    @amnesie148 3 years ago

    Hi Sreeni, I have a further question: how does this backbone replacement for U-Net work? Is it fair to say that when we choose the encoder, for example 'vgg16', we get a few more fully connected layers compared to the original U-Net in the encoder path, and in the decoder part the sm library automatically builds a matching, symmetric decoder, while the whole network architecture remains the U-Net shape? Am I right? Or if there are any papers on this sm library, I would like to read them. Thanks in advance!

  • @gerokatseros
    @gerokatseros 2 years ago

    What TensorFlow and Keras versions are you using? Is there an image or a configuration file we can use to match your setup so that the program will run? Maybe put some info on your GitHub?

  • @ShakirKhan-th7se
    @ShakirKhan-th7se 2 years ago

    If we want to train with a ResNet101 backbone instead of ResNet34, do we only need to change the model fit function, or do we have to make some other changes?

  • @abrahammulat8002
    @abrahammulat8002 3 years ago

    Thank you for the great video, very helpful

  • @texasfossilguy
    @texasfossilguy 2 years ago

    Is there a video just on breaking large images and masks apart into smaller parts? I would love to see the code for how that is done to split images and keep their identifiers.

  • @rs9130
    @rs9130 3 years ago

    Hello Sreeni,
    How do I set up the formula to calculate individual class IoU if my total number of classes is up to 40?
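One way to do this (a hedged sketch, not from the video): compute intersection over union per class directly from the integer label maps, which scales to any number of classes, including 40.

```python
import numpy as np

def per_class_iou(y_true, y_pred, n_classes):
    """IoU for each class, computed from integer label maps."""
    ious = []
    for c in range(n_classes):
        t = (y_true == c)
        p = (y_pred == c)
        inter = np.logical_and(t, p).sum()
        union = np.logical_or(t, p).sum()
        # NaN marks classes absent from both prediction and ground truth
        ious.append(float(inter) / float(union) if union else float('nan'))
    return ious

y_true = np.array([[0, 0], [1, 2]])
y_pred = np.array([[0, 1], [1, 2]])
print(per_class_iou(y_true, y_pred, 3))  # [0.5, 0.5, 1.0]
```

Averaging the non-NaN entries gives mean IoU, and the per-class list shows exactly which of the 40 classes segment poorly.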

  • @suaiburrahman3582
    @suaiburrahman3582 2 years ago

    Great job indeed ❤️

  • @vidyapatil6445
    @vidyapatil6445 2 years ago

    When I use .jpg images and append them to train_images, I get this error: AttributeError: 'numpy.ndarray' object has no attribute 'append'.
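That error usually means `.append` was called on a NumPy array; collect the images in a Python list and convert once at the end (a minimal sketch, with a placeholder array standing in for the loaded .jpg):

```python
import numpy as np

train_images = []  # a Python list, which does have .append
for _ in range(3):
    img = np.zeros((128, 128, 3), dtype=np.uint8)  # stand-in for cv2.imread(...)
    train_images.append(img)

train_images = np.array(train_images)  # convert once, after the loop
print(train_images.shape)  # (3, 128, 128, 3)
```

If `train_images` was initialized with `np.array(...)` before the loop, it is already an ndarray and appending fails with exactly that AttributeError.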

  • @MuhammadKhan-zu6fx
    @MuhammadKhan-zu6fx 1 year ago

    Great Explanation.

  • @sarahlevy9464
    @sarahlevy9464 3 years ago +1

    Hello, thanks for the video. I would like to know if it is possible to apply segmentation_models to images of shape 1500 by 1500?

    • @konstantin6482
      @konstantin6482 3 years ago

      Lmao nooo
      Try to calculate the number of weights in the first layer

  • @NisseOhlsen
    @NisseOhlsen 3 years ago

    Thank you very much! Question: how are the skip connections between the BACKBONE encoder and the U-Net decoder realized? Is that hard-coded, since they know the specific layer architectures of each network?

    • @konstantin6482
      @konstantin6482 3 years ago +1

      Yes, they basically have a dictionary with the name of a backbone as a key and a list of layer names as a value. They use either the weights or the shapes of these layers during the upsampling.
      You can always clone the repo and see it for yourself with PyCharm and the Ctrl + B command ;)
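A hypothetical sketch of the lookup described above (the layer names here are illustrative stand-ins, not copied from the segmentation_models source):

```python
# Illustrative mapping: backbone name -> encoder layers whose outputs are
# concatenated with the decoder feature maps during upsampling.
SKIP_LAYERS = {
    'vgg16':    ('block5_conv3', 'block4_conv3', 'block3_conv3', 'block2_conv2'),
    'resnet34': ('stage4_unit1_relu1', 'stage3_unit1_relu1',
                 'stage2_unit1_relu1', 'relu0'),
}

def skip_connections(backbone_name):
    """Return the encoder layer names used as skip connections, deepest first."""
    return SKIP_LAYERS[backbone_name]

print(skip_connections('vgg16')[0])  # block5_conv3
```

So the decoder is generic; only this table changes per backbone, which is why any supported encoder can be dropped into the same U-Net shape.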

  • @منةالرحمن
    @منةالرحمن 3 years ago

    Doesn't work anymore. Please check for us the installed versions of Keras and TensorFlow.
    Error while importing libraries:
    AttributeError: module 'tensorflow_core.compat.v2' has no attribute '__internal__'

  • @mehdialibegli8233
    @mehdialibegli8233 2 years ago

    Hello, can we use ResNet-50 with a U-Net on BraTS 2020? I used it but have an error in the ResNet layers.

  • @applejuice5785
    @applejuice5785 3 years ago

    What if my images have different sizes? I can't find a video from you where you cover that.

  • @tahirak.7565
    @tahirak.7565 11 months ago

    You can save lives, you know 😅❤️ God's favorite, absolutely.

  • @baransanatitarrah2417
    @baransanatitarrah2417 1 year ago

    Thanks for your video. The whole code works well on my training data, but my validation loss doesn't improve and the final test results aren't compelling. I applied L2 regularization and also data augmentation, but it doesn't get better. What do you think might be the problem? :(

  • @sauravsubudhi8442
    @sauravsubudhi8442 3 years ago

    Hi, I am getting an AttributeError: module 'keras.backend' has no attribute 'observe_object_name' error.

  • @juansebastian6284
    @juansebastian6284 3 years ago

    Hi Sreeni,
    Great job. These videos are really helpful for me :)
    I have one question: is it possible to use ImageDataGenerator for image augmentation with this method?
    I ask because the preprocessing is done automatically, and it depends on which model we are using.
    Another doubt I have: is the decoding path a mirror of the encoder, or are we only changing the encoder while the decoder is still like a classical U-Net?
    I hope someone can help me :)

  • @mohammadrajabi3152
    @mohammadrajabi3152 3 years ago

    Thank you very much for the video. I have a problem with very low mean IoU after training: I have a 1000-image dataset with 20 classes, and the model predicts very badly. How can I solve it? My dataset is railway scenes, though. Any help would be appreciated.

  • @iulencabeza6454
    @iulencabeza6454 3 years ago

    Hi again Sreeni.
    Do you really recommend changing grayscale images to RGB to use pre-trained models?
    Isn't working from scratch a better solution?
    Best

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      If you have enough training data and resources to train, then starting from scratch is preferable. If you believe pre-trained models are going to help, then one way to apply them to grayscale images is by converting them to RGB by duplicating the channels. If you really do not want to do that but still want to benefit from transfer learning, then you can change the weights of the first convolutional layer. Here is the description: ua-cam.com/video/5kbpoIQUB4Q/v-deo.html
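The channel-duplication trick from the reply, as a minimal NumPy sketch:

```python
import numpy as np

# A single-channel grayscale image (stand-in for a real microscopy image)
gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8)

# Duplicate the channel three times to get a pseudo-RGB image that a
# pre-trained (ImageNet) backbone expecting 3 channels can accept.
rgb = np.stack([gray, gray, gray], axis=-1)
print(rgb.shape)  # (128, 128, 3)
```

The image content is unchanged; only the shape now matches what the pre-trained weights expect.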

  • @tasleemmustafa6005
    @tasleemmustafa6005 2 years ago

    Hi Sreeni, very informative video. Can we use this code for binary semantic segmentation, in which we have only two classes, background (0) and foreground (1), by setting n_classes=2?

  • @youstuffs5
    @youstuffs5 3 years ago

    Your videos are of really great help.
    I am trying to use a pre-trained ResNet as a backbone for U-Net (with some modified skip connections).
    I believe that, for my purpose, sm.Unet from the library will not work.
    Could you please provide some hint/direction?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      Why do you believe Unet will not work for you? I recommend using vgg16 backbone.

  • @moumitamoitra1829
    @moumitamoitra1829 3 years ago +1

    I really like all of your videos. Moreover, all your videos are arranged in a systematic order, which is really helpful. I appreciate your effort.
    I have one request: could you please make some videos on classification using U-Net with VGG, ResNet, and Inception? Thanks

  • @angelceballos8714
    @angelceballos8714 3 years ago

    How can I export all the masks in my dataset in APEER? Do you need to manually download them one by one?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Can you please ask APEER questions to their support: support@apeer.com? I normally combine all my images into a TIFF stack, which makes it easy to handle. So after labeling images I can just download a single TIFF stack. I never tried multiple separate images; maybe you need to download them one by one, which sucks, but you'd better make sure by asking support.

  • @goodnewsamieghemen4733
    @goodnewsamieghemen4733 3 years ago

    You're amazing! Thanks

  • @caiyu538
    @caiyu538 3 years ago

    I want to do some object detection on CT/MRI images. Is it possible to have tutorials on 3D object detection?

  • @mdhafizurrahman5386
    @mdhafizurrahman5386 3 years ago

    Can you please tell me why I am getting the following error while running the training?
    ValueError: Dimension 1 in both shapes must be equal, but are 32 and 31. Shapes are [1,32,32] and [1,31,31]. for '{{node model_1/decoder_stage1_concat/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](model_1/decoder_stage1_upsampling/resize/ResizeNearestNeighbor, model_1/stage3_unit1_relu1/Relu, model_1/decoder_stage1_concat/concat/axis)' with input shapes: [1,32,32,256], [1,31,31,128], [] and with computed input tensors: input[2] = .

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      You seem to be trying to concatenate arrays of different shape. Not sure how and why you got arrays of different shape but the error indicates that you are trying to concatenate 1,32,32 and 1,31,31 arrays.

    • @connielee4359
      @connielee4359 3 years ago

      Hi, have you solved this problem? I got a similar error and have no idea how to deal with it.

  • @biplugins9312
    @biplugins9312 3 years ago

    I have managed to find the CamVid data. It is 480*360, which gives a patch size of 120*120. Maybe I should resize this to 128*128 to keep things clean?
    Also, CamVid has 12 labels. Could this be too many? Maybe I need to choose 4 out of the 12?
    (I didn't see that you had responded before editing. In the audio you referred to the annotation part, which I knew was apeer.com.)

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      The dataset can be downloaded from here: drive.google.com/file/d/1HWtBaSa-LTyAMgf2uaz1T9o1sTWDBajU/view
      To annotate your own images you can go to www.apeer.com

  • @deepi8787
    @deepi8787 3 years ago

    Hi, your videos are awesome… Help me with this question: can I give a feature-extracted and compressed image as an input image to the U-Net for binary segmentation? … Waiting for your valuable reply

  • @chhavirajchauhan9898
    @chhavirajchauhan9898 3 years ago

    How can we use UNET for 1D signals?

  • @internetexplorer7880
    @internetexplorer7880 3 years ago

    What should I do if my masks are in RGB format? Can someone help me with a solution, or maybe links to resources that use RGB masks?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Watch out for my next python tips and tricks video...

  • @zakiyaazizahcahyaningtyas3480
    @zakiyaazizahcahyaningtyas3480 3 years ago

    Can we use this for 3D segmentation? Because as far as I know, those backbones were built for 2D images. It would be really helpful if there were 3D versions of those backbones.

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Yes. Please watch my video: ua-cam.com/video/Dt73QWZQck4/v-deo.html

  • @rezatabrizi4390
    @rezatabrizi4390 3 years ago

    Hi, thanks for your effort; I like all your videos.
    I have one problem: after installing segmentation_models in Google Colab, when I want to import segmentation_models as sm I get this error message: module 'keras.utils' has no attribute 'generic_utils'.
    How can I import it in Google Colab, please?

    • @mdhafizurrahman5386
      @mdhafizurrahman5386 3 years ago +1

      Run this before importing all libraries: %env SM_FRAMEWORK=tf.keras

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Importing segmentation models library may give you generic_utils error on TF2.x
      If you get an error about generic_utils...
      Option 1:
      change
      keras.utils.generic_utils.get_custom_objects().update(custom_objects)
      to
      keras.utils.get_custom_objects().update(custom_objects)
      in
      .../lib/python3.7/site-packages/efficientnet/__init__.py
      Option 2 (especially for Google Colab):
      Work with Tensorflow 1.x
      In google colab, add this as your first line.
      %tensorflow_version 1.x
      (Or just create a new environment in your local IDE to use TF1.x)

  • @nouhamejri1698
    @nouhamejri1698 3 years ago

    Good job. I'm trying to do semantic segmentation using U-Net and backbones, but my loss is 0.8 and doesn't decrease during training. Any ideas about the reason?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Many potential reasons... first please ensure you divide your pixel values by 255 to scale them between 0 and 1. Then, make sure you preprocess using whatever backbone you have used. Then, try different loss functions to see if there is an improvement. Finally, experiment with optimizers.

    • @nouhamejri1698
      @nouhamejri1698 3 years ago

      @@DigitalSreeni Scaling and preprocessing are done; I will try other loss functions. Thanks a lot

    • @punamsarmah3436
      @punamsarmah3436 3 years ago

      "import segmentation_models as sm" is not working in Colab. Can anyone help me, please? It is showing an error.

  • @funentertainment8207
    @funentertainment8207 3 years ago

    I have a question:
    I have a dataset of almost 17K images; should I make 17K masks as well?

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      If you can make 17k masks, then why do you even need U-Net? The masks are there to teach the algorithm what the ground truth looks like. You start by annotating (creating masks for) a handful of images and segmenting your 17k images. If the results are not satisfactory, you pick images where the segmentation is failing very badly, annotate them, and add the masks to your collection of ground truth masks. You then try segmenting your images again. This is an iterative process, and you will find that after a few iterations your model is generalized enough to segment your 17k images with acceptable accuracy.

    • @funentertainment8207
      @funentertainment8207 3 years ago

      @@DigitalSreeni Thank you for your reply.
      What do you suggest, or in other words, what ratio of the dataset's images should be masked?
      Suppose there are 8 classes and each class contains almost 1000-1200 images; then how many images do I need to mask for the model?

  • @biplugins9312
    @biplugins9312 3 years ago

    In order to really follow along, it would help if I could use the same data. You have the code posted very nicely, but the image data there hasn't been updated in many months. Can you point me to the data you used? Thanks.

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      Please read the description for this video you will find the link to the dataset.

    • @biplugins9312
      @biplugins9312 3 years ago

      @@DigitalSreeni Thanks for your answer. It really helps to be able to ask questions.
      A while ago I ran into a problem that I couldn't run your code. It turns out that "old computers" can't run deep learning.
      I switched over to Google Colab, which runs in the cloud instead of on my local machine. The easy stuff I can do in Spyder on my local machine and then copy-paste the code into Colab.
      On video 204 I found the mitochondria data, but wasn't quick enough to delete my request. I loaded a single TIFF file with a stack of 165 slices for the training data and another stack for the mask data.
      For some reason I get 1980 256*256 patches whereas you got 1600 patches. Not a huge difference, but I still don't understand why.
      The training finally finished at 17m 23s. Is this reasonable? (This is my first attempt ever to run a serious program on Colab, so maybe it is correct. My question really is: is this OK, or should I be looking for problems?)
      I filled train_images[] and train_masks[] using my code to read the TIFF stack and then let your code do the rest. Your part took 17m 23s.

    • @biplugins9312
      @biplugins9312 3 years ago

      Just to let you know, I turned on the use of the GPU inside Colab and the time dropped to 20 seconds! Vive la difference!
      Even more important is that history = model.fit(X_train, y_train, was crashing on every run. After I turned on the GPU, history no longer crashed. There are some other crashes which I am looking at now.

    • @jyothir07
      @jyothir07 3 years ago

      From the TIFF stack, use ImageJ: Save As -> Image Sequence.

  • @pallavisachdeva5194
    @pallavisachdeva5194 3 years ago

    Very very very useful

  • @imranhosen240
    @imranhosen240 3 years ago

    Really awesome ....

  • @sheetalpawar3623
    @sheetalpawar3623 2 years ago

    Hello Sir, it has been a wonderful experience to follow your tutorials and learn. Sir, I am facing issues while using this model to predict on an unknown image that is not part of the dataset. Would you please suggest the code lines to predict on unknown images?

    • @DigitalSreeni
      @DigitalSreeni 2 years ago

      Can you please elaborate on what issues you are facing?

  • @jyothir07
    @jyothir07 3 years ago

    Sir, please provide the versions of TensorFlow and Keras used for the code.

  • @jaddyroot
    @jaddyroot 3 years ago +2

    Colleagues, I faced many errors during the compilation and training stages with 120x120 px images, and I discovered that the image size should be divisible by 32 (e.g. 32x32x3 or 128x128x3). Please note this.
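One way to satisfy this constraint without resizing is zero-padding up to the next multiple of 32, so the repeated 2x down/upsampling stages line up (a NumPy sketch; `pad_to_multiple` is an illustrative helper, not from the video):

```python
import numpy as np

def pad_to_multiple(image, multiple=32):
    """Zero-pad height and width up to the next multiple,
    e.g. 120x120 -> 128x128."""
    h, w = image.shape[:2]
    ph = (-h) % multiple  # rows to add
    pw = (-w) % multiple  # columns to add
    # Pad only the spatial axes; leave any channel axis untouched
    return np.pad(image, ((0, ph), (0, pw)) + ((0, 0),) * (image.ndim - 2))

img = np.zeros((120, 120, 3))
print(pad_to_multiple(img).shape)  # (128, 128, 3)
```

Cropping the prediction back to the original 120x120 afterwards undoes the padding.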

  • @mohghahramani
    @mohghahramani 3 years ago +1

    Hi Sreeni, many thanks for your detailed explanation. May I ask how I can get the dataset in the form you use in the code? The one that I download from Google Drive has a very different structure.

    • @haikalmss5619
      @haikalmss5619 1 year ago

      Hi. Yes, I think I face the same problem as you. When I go into the folder Desktop\sandstone_data_for_ML\full_labels_for_deep_learning\128_patches\images, there is only 1 image, but when I view it using Fiji I can scroll through it, which is 1600 images. Somehow my code shows an error: ValueError: not enough values to unpack (expected 3, got 1). I think it comes from that. I hope Sreeni will notice us.

    • @haikalmss5619
      @haikalmss5619 1 year ago

      Hi, it seems that I have found the solution to this.
      We have a multi-page TIFF (Tagged Image File Format) file where each page corresponds to an individual image. When you open it in an image viewer like Fiji/ImageJ, it correctly recognizes the 1600 pages as separate images.
      To separate the images, I used Python's Pillow (PIL) library to split the multi-page TIFF into separate files. Here's an example of how I did it:

      from PIL import Image

      # Open the multi-page TIFF image
      with Image.open('Desktop\\Multiclass U-Net\\128_patches\\images\\images128.tif') as img:
          # Loop through each page (image) in the TIFF
          for i in range(img.n_frames):
              img.seek(i)
              # Process each image here; for example, save it as a separate file
              img.save(f'Desktop\\Multiclass U-Net\\128_patches\\images\\img{i}.tif')

  • @lequangminh1097
    @lequangminh1097 3 years ago

    How can we apply this to large images, about 4000x3000 pixels?

  • @ummugaza
    @ummugaza 3 years ago

    Thank you... 😊

  • @kavithashagadevan7698
    @kavithashagadevan7698 3 years ago

    This is really great. Your work has helped me a lot in my research on electron microscopy images. I am unable to find the code used in this video at the above-mentioned GitHub link. Could you kindly upload the code, please? Thank you very much.

    • @DigitalSreeni
      @DigitalSreeni 3 years ago +1

      The code gets uploaded roughly 8 hrs after the video. Please check back in a couple of hours.

    • @kavithashagadevan7698
      @kavithashagadevan7698 3 years ago

      @@DigitalSreeni Thank you. You are a lifesaver

  • @shouvikdey7078
    @shouvikdey7078 2 years ago

    Can you make a video on semantic segmentation with Mask R-CNN? I must confess, your code helps me a lot in learning and practicing ML. Thank you.

  • @caiyu538
    @caiyu538 3 years ago

    What is the difference among the preprocessing functions for VGG, ResNet, and Inception? Could you describe it in detail in a video tutorial?
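In short: the Keras-style 'caffe' mode (VGG/ResNet) converts RGB to BGR and subtracts per-channel ImageNet means, the 'tf' mode (Inception) scales pixels to [-1, 1], and the 'torch' mode divides by 255 and normalizes by the ImageNet mean/std. A NumPy sketch of these conventions (illustrative only; in practice sm.get_preprocessing(BACKBONE) supplies the right one):

```python
import numpy as np

def preprocess_caffe(x):   # VGG16 / ResNet50 style
    x = x[..., ::-1].astype('float64')               # RGB -> BGR
    return x - np.array([103.939, 116.779, 123.68])  # subtract BGR means

def preprocess_tf(x):      # Inception style
    return x.astype('float64') / 127.5 - 1.0         # scale to [-1, 1]

def preprocess_torch(x):   # torchvision style
    x = x.astype('float64') / 255.0
    return (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]

img = np.full((2, 2, 3), 255, dtype=np.uint8)  # all-white toy image
print(preprocess_tf(img).max())  # 1.0
```

Mixing these up (e.g. feeding [-1, 1] inputs to a caffe-style backbone) is a common cause of a loss that refuses to decrease.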

  • @AgriculturaDigital
    @AgriculturaDigital 3 years ago +1

    Your videos are so good. Very informative. I like to use Spyder as well, but I really like to use #%%, so you can split your code into cells and use Shift + Enter or Ctrl + Enter in each cell to run it. I hope you are going to like it. It's like the best of both worlds, Spyder and Jupyter.

  • @chengyouyue5295
    @chengyouyue5295 2 years ago

    Excellent

  • @kop0164
    @kop0164 1 year ago

    Can anyone provide me the source code presented in the video, please?

    • @DigitalSreeni
      @DigitalSreeni 1 year ago

      Please read the description to find link to my Github page with the code.

    • @kop0164
      @kop0164 1 year ago

      @@DigitalSreeni not that code

  • @TheedonCritic
    @TheedonCritic 2 years ago

    Nice tutorial. Can these models be ensembled, and how, please?

  • @MyMaha1989
    @MyMaha1989 1 year ago

    Sir, can you make a video on RegNet for image classification?

  • @nabilaelloumi4233
    @nabilaelloumi4233 2 years ago

    I have a dataset from a DICOM PACS for lung cancer; I want to segment it with a U-Net architecture.

  • @tapansharma460
    @tapansharma460 3 years ago

    Best one, sir

  • @srpriya1963
    @srpriya1963 3 years ago

    Nice explanation, sir. I have some queries. I want to predict one particular disease at an earlier stage. May I combine deep learning (preprocessing), YOLO (for real-time object detection), U-Net (for segmentation), and CNN (for classification) in a single project? Is it possible? Please help me, sir; I expect your valuable suggestion.

    • @DigitalSreeni
      @DigitalSreeni 3 years ago

      Technically you can write code to create the workflow you mentioned in your comment. You cannot combine everything into a single model, as each task comes with unique requirements, for example loss functions. But you can create classes for each task and write code to manage the information flow. I never had to work on such a scenario and cannot think of another project that involved all these steps.

    • @srpriya1963
      @srpriya1963 3 years ago

      @@DigitalSreeni thank you for your quick response

    • @srpriya1963
      @srpriya1963 3 years ago

      @@DigitalSreeni I found the best results separately using those models, but finally I want to classify the disease based on the previous performance; that's why I asked, sir. Please give me any suggestion about how to conclude this.

  • @mdhafizurrahman5386
    @mdhafizurrahman5386 3 years ago

    Hi, I am trying with my own RGB dataset, annotated with VIA. There are two classes, Copper and Belmouth. I already used Python to generate masked pictures, so each image generated multiple mask images. But I am not sure how to put them together and encode the labels using the label encoder. I used this code from your other tutorial to put the masks together:

    train_masks = np.zeros((len(train_ids), 244, 244, 1), dtype=np.bool)
    for n, id_ in tqdm(enumerate(train_ids), total=len(train_ids)):
        mask = np.zeros((244, 244, 1), dtype=np.bool)
        for mask_file in next(os.walk(path + '/' + id_ + '/masks/'))[2]:
            mask_ = imread(path + '/' + id_ + '/masks/' + mask_file, 0)
            mask_ = np.expand_dims(resize(mask_, (244, 244), mode='constant',
                                          preserve_range=True), axis=-1)
            mask = np.maximum(mask, mask_)
        train_masks[n] = mask

    #Convert list to array for machine learning processing
    train_masks = np.array(train_masks)
    train_masks.shape

    After that, I do label encoding, which gives me the array array([0, 1]).
    Should I not get an array of 0, 1, 2, because of two main classes + background?
    Is my process of mask image generation correct?

  • @aartinarendrabokade9791
    @aartinarendrabokade9791 2 years ago

    Hello. Can you please create one tutorial for segmenting breast ultrasound/mammography images with all these architectures as backbones

    • @DigitalSreeni
      @DigitalSreeni 2 years ago

      Can you suggest a public dataset that I can use for this purpose?

  • @tilkesh
    @tilkesh 2 years ago

    Thank you

  • @nabilaelloumi4233
    @nabilaelloumi4233 2 years ago

    I tried APEER but it doesn't work.

    • @DigitalSreeni
      @DigitalSreeni 2 years ago

      That's unfortunate. It means your images cannot be segmented using a U-Net type of architecture. APEER uses U-Net, modified with EfficientNet, so it should have worked if U-Net is appropriate for your problem.

  • @biplugins9312
    @biplugins9312 3 years ago

    I did in fact try what you suggested and it didn't work.
    After thinking about the problem, I have a guess as to what is going on.
    Some videos ago you talked about TensorFlow 1.x. I tried to use 1.x and ran into all sorts of problems, none of which I understood.
    Finally I dug into the code in Colab and found I could use the latest TensorFlow if I changed the directory structure.
    When it gave me an error that it couldn't find some file, I would look and see where the file was actually located. For example, my changes included
    from tensorflow.python.keras.utils.np_utils import normalize
    from tensorflow.python.keras.utils.data_utils import get_file
    For normalize to work I needed to add np_utils, and for get_file I needed to add data_utils.
    Without the additions, nothing would fly (presumably because the locations of the files had changed).
    With the directory changes things did work, because I made the changes in your code, where the calls were taking place.
    So why does it fail for me for segmentation models? My guess is that the call is being made inside segmentation models itself.
    My external changes had no effect. Why does it then work for you? My guess is that we are using different libraries. I have
    tensorflow 2.5.0
    keras 2.5.0
    My next question was: if I open a new notebook, will I get a fresh start, or will I be pulling information from past notebooks?
    I tried opening a new notebook, and for about 15 min the basic entries were different. Then there was a flash on the screen and I saw the entries in the new notebook were now the same as in existing notebooks. This means I don't get a fresh start. Colab says this:
    "How can I reset the virtual machine(s) my code runs on, and why is this sometimes unavailable? Selecting Runtime > Factory reset runtime will return all managed virtual machines assigned to you to their original state. This can be helpful in cases where a virtual machine has become unhealthy, e.g. due to accidental overwrite of system files, or installation of incompatible software. Colab limits how often this can be done to prevent undue resource consumption. If an attempt fails, please try again later."
    If you have a different TensorFlow, perhaps I need a factory reset, and then I can go back and try to install TensorFlow 1.x?? Since this is a big change, I wanted to ask your opinion before killing my current notebooks. (Which version of TensorFlow are you using, so that we both use the same version?)
    Edit: I saw your video on Colab editing. I didn't realize it was so easy to modify system files! I went inside the calling routine to get_file and added ".data_utils" to the path. To my total amazement, it automatically saved the changes. I had to restart the kernel and then run everything again to get the variables back. Part of the code is to pip install segmentation-models, so I thought it would override the change I just made, but it didn't! Now it safely passes the point where it was falling over all the time.
    It might be that I have to make the change again tomorrow if Google erases what pip installs?? We'll see, but at least I can make progress again.

  • @diegostaubfelipe4310
    @diegostaubfelipe4310 3 years ago

    I had this problem in Colab: module 'keras.utils' has no attribute 'get_file' when using segmentation_models. I solved it with sm.set_framework('tf.keras')
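    A minimal sketch of this workaround. The SM_FRAMEWORK environment variable is read by segmentation_models at import time, so setting it first (in addition to the explicit set_framework call) covers both paths; the import is guarded in case the library is not installed:

```python
import os

# segmentation_models reads SM_FRAMEWORK at import time, so it must be set
# before "import segmentation_models".
os.environ["SM_FRAMEWORK"] = "tf.keras"

try:
    import segmentation_models as sm
    sm.set_framework("tf.keras")  # explicit call, in case the variable is ignored
except ImportError:
    sm = None  # segmentation-models is not installed in this environment
```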

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      Thanks for the suggestion. You can also follow the process I've used to fix this error: ua-cam.com/video/syJZxDtLujs/v-deo.html

  • @ikhlasahmad5505
    @ikhlasahmad5505 3 years ago

    Great help, sir, but if you could provide some help on how to use multi-channel TIFF masks belonging to different classes for semantic segmentation, it would be very kind of you. Thanks :)

  • @addisbelayneh1585
    @addisbelayneh1585 3 years ago

    Thank you, sir.

  • @islamzohier4032
    @islamzohier4032 3 years ago

    Hello, thanks for your comprehensive videos, they are really helpful.
    I want to ask: what if my data classes were labeled with 3 RGB colors instead of grayscale? Because now I am having a shape issue.

    • @islamzohier4032
      @islamzohier4032 3 years ago

      Or is there any way in Python to reprocess the three-class RGB images into grayscale classes from 0 to 2?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      I've covered U-Net under many scenarios: grayscale, RGB and even multichannel. Please go through the videos to find the one that matches your needs.
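One possible way to convert RGB-coded masks into integer class labels. The color-to-class mapping below is hypothetical; check the colors actually present in your own masks (e.g. with np.unique) first:

```python
import numpy as np

# Hypothetical color coding: each class was painted with one RGB color.
COLOR_TO_CLASS = {
    (0, 0, 0): 0,      # background (black)
    (255, 0, 0): 1,    # class 1 (red)
    (0, 255, 0): 2,    # class 2 (green)
}

def rgb_mask_to_labels(mask_rgb):
    """Convert an (H, W, 3) RGB label image to an (H, W) array of class ids."""
    labels = np.zeros(mask_rgb.shape[:2], dtype=np.uint8)
    for color, class_id in COLOR_TO_CLASS.items():
        # Pixels matching this color on all 3 channels get this class id
        matches = np.all(mask_rgb == np.array(color, dtype=mask_rgb.dtype), axis=-1)
        labels[matches] = class_id
    return labels

# Tiny 2x2 demo mask
demo = np.array([[[0, 0, 0], [255, 0, 0]],
                 [[0, 255, 0], [0, 0, 0]]], dtype=np.uint8)
print(rgb_mask_to_labels(demo))  # [[0 1]
                                 #  [2 0]]
```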

  • @XinhLe
    @XinhLe 3 years ago

    Thank you.

  • @biplugins9312
    @biplugins9312 3 years ago

    Because of my "old computer" I can't use Spyder for serious work (no GPU), so I use Colab.
    In order to get Colab to work, I needed to make 2 changes: "from keras.utils import normalize" to "from keras.utils.np_utils import normalize"
    and "from keras.utils import to_categorical" to "from keras.utils.np_utils import to_categorical".
    It is crashing on the first model: ---->
    /usr/local/lib/python3.7/dist-packages/sklearn/preprocessing/_label.py:251: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
    y = column_or_1d(y, warn=True)
    Class values in the dataset are ... [0 1 2 3]
    ---------------------------------------------------------------------------
    AttributeError Traceback (most recent call last)
    in ()
    117
    118 # define model
    --> 119 model1 = sm.Unet(BACKBONE1, encoder_weights='imagenet', classes=n_classes, activation=activation)
    120
    121 # compile keras model with defined optimozer, loss and metrics
    6 frames
    /usr/local/lib/python3.7/dist-packages/classification_models/weights.py in load_model_weights(model, model_name, dataset, classes, include_top, **kwargs)
    23 ' as true, `classes` should be {}'.format(weights['classes']))
    24
    ---> 25 weights_path = keras_utils.get_file(
    26 weights['name'],
    27 weights['url'],
    It is complaining about a column vector, but I think this may be a red herring, not the real problem. I looked very carefully at what you did in the video, but it doesn't seem to like the unique values 0,1,2,3 in y. My guess is it's some clash between versions of different parts of the libraries.
    A couple of videos back I did try using tensorflow 1.x, but that was a disaster in that Keras was no longer compatible. By adding the np_utils, I finally got the latest tensorflow to work. Now I may be paying the price. I may just have to push on looking at your videos without being able to follow along on my own. Unless of course you have a suggestion on how to solve the problem.
    Thanks for your great work. I really did manage to learn some new things.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      This is the pain that comes with working on ill-maintained external libraries. In this case, the fix is simple. You need to find the weights.py file and change keras_utils.get_file to tensorflow.keras.utils.get_file. In fact I would do this... import the library first and then...
      from tensorflow.keras.utils import get_file
      weights_path = get_file(.......
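      Before editing weights.py, it is worth confirming in a fresh cell that the replacement import actually resolves in your environment; a small check along those lines (guarded in case TensorFlow is missing):

```python
# If this import works and get_file is callable, changing keras_utils.get_file
# to tensorflow.keras.utils.get_file inside weights.py should fix the error.
try:
    from tensorflow.keras.utils import get_file
    HAVE_GET_FILE = callable(get_file)
except ImportError:          # TensorFlow is not installed here
    HAVE_GET_FILE = None

print("tf.keras get_file available:", HAVE_GET_FILE)
```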

  • @biplugins9312
    @biplugins9312 3 years ago

    This is getting confusing with many replies, so I'll start again. I didn't include 1 crucial line in the error
    /usr/local/lib/python3.7/dist-packages/sklearn/preprocessing/_label.py:251: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
    y = column_or_1d(y, warn=True)
    Class values in the dataset are ... [0 1 2 3]
    ---------------------------------------------------------------------------
    AttributeError Traceback (most recent call last)
    in ()
    117
    118 # define model
    --> 119 model1 = sm.Unet(BACKBONE1, encoder_weights='imagenet', classes=n_classes, activation=activation)
    120
    121 # compile keras model with defined optimozer, loss and metrics
    6 frames
    /usr/local/lib/python3.7/dist-packages/classification_models/weights.py in load_model_weights(model, model_name, dataset, classes, include_top, **kwargs)
    23 ' as true, `classes` should be {}'.format(weights['classes']))
    24
    ---> 25 weights_path = keras_utils.get_file(
    26 weights['name'],
    27 weights['url'],
    AttributeError: module 'keras.utils' has no attribute 'get_file'
    It is true that keras has no attribute 'get_file', but keras.utils.data_utils does have get_file
    Following your suggestion I did
    from tensorflow.python.keras.utils.data_utils import get_file
    However the call to keras.utils.get_file is internal to their function, so my from.... import is correctly ignored.
    The other things are part of your code, so i have some control over what is happening with your code only.
    Catch-22 ??

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Please try what I suggested in a separate line, execute the line and see if it works, and only then apply it to the weights.py file. You seem to be following multiple pieces of advice and mixing the suggestions. I literally typed this in Colab and it worked: from tensorflow.keras.utils import get_file

    • @saadiaazeroual8857
      @saadiaazeroual8857 3 years ago

      Please, did you solve this error:
      /usr/local/lib/python3.7/dist-packages/sklearn/preprocessing/_label.py:251: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
      y = column_or_1d(y, warn=True) ??
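      The warning itself is harmless (scikit-learn simply flattens the column vector for you), and it can be avoided by passing LabelEncoder a 1-D array as the message suggests. A sketch with a toy mask stack (the shapes are illustrative, following the tutorial's train_masks pattern):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# Fake stack of 2 masks of size 2x2 with class values 0..3
train_masks = np.array([[[0, 1], [2, 3]],
                        [[3, 2], [1, 0]]])

labelencoder = LabelEncoder()
n, h, w = train_masks.shape
masks_flat = train_masks.reshape(-1)             # 1-D array -> no warning
encoded = labelencoder.fit_transform(masks_flat)
masks_encoded = encoded.reshape(n, h, w)         # back to (n, h, w)

print("Class values in the dataset are ...", np.unique(masks_encoded))
```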

  • @dimane7631
    @dimane7631 2 years ago

    Prerequisite: ua-cam.com/video/J_XSd_u_Yew/v-deo.html