C4W2L10 Data Augmentation

  • Published 25 Dec 2024

COMMENTS • 14

  • @aldonin21
    @aldonin21 2 months ago

    Hey, if I perform k-fold cross-validation on an augmented dataset and I want balanced classes in both the train and test sets (my dataset is imbalanced by default), is it a smart approach to augment the train and test sets separately for each fold, so that modified copies of the original images don't land in both the train and test set at the same time and we avoid data leakage?
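One way to implement what this question describes, as a minimal sketch (the function names, the flip augmentation, and the tiny demo dataset are all assumptions for illustration): split the *original* images first, then augment only the training fold, so no augmented copy of a validation image can leak into training.

```python
# Per-fold augmentation that avoids leakage: split originals first,
# then augment the training fold only.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def flip_horizontal(batch):
    # Simple augmentation: mirror each image left-right (N, H, W, C layout).
    return batch[:, :, ::-1, :]

def augmented_folds(X, y, n_splits=5, seed=0):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, val_idx in skf.split(X, y):
        X_tr, y_tr = X[train_idx], y[train_idx]
        # Augment the training fold only; the validation fold stays original.
        X_tr_aug = np.concatenate([X_tr, flip_horizontal(X_tr)])
        y_tr_aug = np.concatenate([y_tr, y_tr])
        yield X_tr_aug, y_tr_aug, X[val_idx], y[val_idx]

# Tiny demo: 12 fake 8x8 RGB images with imbalanced labels.
rng = np.random.default_rng(0)
X = rng.random((12, 8, 8, 3))
y = np.array([0] * 8 + [1] * 4)
for X_tr, y_tr, X_val, y_val in augmented_folds(X, y, n_splits=4):
    # Each training fold holds the originals plus their flipped copies.
    assert len(X_tr) == 2 * (len(X) - len(X_val))
```

StratifiedKFold also keeps the class ratio roughly equal across folds, which addresses the balance part of the question; oversampling the minority class with extra augmented copies (rather than augmenting everything equally) would be a natural extension of the same pattern.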

  • @danafaiez3663
    @danafaiez3663 4 months ago

    What are some thoughts on whether to apply augmentation to the test dataset? One opinion is to never augment the test dataset because we want test data to represent production. But isn't part of the reason we augment training data that we don't have enough data, and we augment it in ways we think reflect different real scenarios? If that is the case, don't we also want to apply augmentation to the test data?

  • @saramessara4241
    @saramessara4241 3 years ago

    how are RGB values supposed to be negative, or is it just an 8-bit signed representation?

    • @mueez.mp4
      @mueez.mp4 3 years ago +1

      I think by the negative sign he meant to subtract that value from the current values.

    • @chawza8402
      @chawza8402 3 years ago

      I think it reverses the pixel value within its range. Say a pixel's R value is 25 in the range 0 to 255; if we reverse it, the value becomes 255 - 25 = 230. The same applies to every pixel in each color channel.
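The two replies above read the negative sign differently: as a per-channel subtraction, or as an inversion within the 0-255 range. A minimal NumPy sketch of both readings (the single-pixel image and the shift amount are arbitrary examples):

```python
import numpy as np

img = np.array([[[25, 100, 200]]], dtype=np.uint8)  # one RGB pixel

# Reading 1: "-20" means subtract 20 from a channel (clip to stay in range).
shifted = img.astype(np.int16)          # widen dtype so subtraction can't wrap
shifted[..., 0] -= 20                   # shift the R channel down by 20
shifted = np.clip(shifted, 0, 255).astype(np.uint8)

# Reading 2: invert each channel within the 0-255 range.
inverted = 255 - img

print(shifted[0, 0])   # [  5 100 200]
print(inverted[0, 0])  # [230 155  55]
```

Reading 1 matches the additive color shifting described in the video; reading 2 is a different (also valid) augmentation.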

  • @JisuKim-o3u
    @JisuKim-o3u 3 years ago

    Thank you~~~~~

  • @MubashirullahD
    @MubashirullahD 4 years ago

    I wish I had exact numbers. How much is too much augmentation?

    • @quintonsa
      @quintonsa 4 years ago +3

      Too much in the sense that it completely alters whatever is supposed to be represented in the image. Say you shear a cat image by an extreme amount so that the cat is no longer recognizable: that's too much augmentation (shearing, in this case).

  • @lene6641
    @lene6641 5 years ago +12

    "Data augmentation or how to fake your data" :)

  • @이시현학부생-소프트
    @이시현학부생-소프트 4 years ago

    I have a question: when we separate the "conv" model and the "softmax" model, save the last conv layer's output to disk, and then use it as the input to the softmax model as in the last video (transfer learning), can't we use data augmentation? I've seen this in the "Deep Learning with Python" book, but I can't understand why we can't use data augmentation...

    • @MubashirullahD
      @MubashirullahD 4 years ago +1

      Your question is not clear, and I don't remember the context of this video.
      If I had to guess: in transfer learning we use the parameters trained in another model to initialize a new one. You can train this new model on the data you have, and of course you can augment it as well.
      Let me know what your question was if this doesn't answer it.
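As I understand the book's point, the issue is ordering: if you run the conv base once, cache its outputs, and train only the softmax head on those cached features, every epoch sees the exact same feature vectors, so image-level augmentation can never reach the head. A toy sketch (the `conv_base` and `augment` stand-ins are invented for illustration, not the book's code):

```python
import numpy as np

def conv_base(images):
    # Stand-in for a frozen convolutional base: mean of each image's left
    # half, so the "feature" is sensitive to orientation like real conv maps.
    return images[:, :, : images.shape[2] // 2, :].mean(axis=(1, 2, 3))

def augment(images):
    # A deterministic augmentation for the demo: horizontal flip.
    return images[:, :, ::-1, :]

rng = np.random.default_rng(0)
images = rng.random((4, 8, 8, 3))

# Option A: precompute features once and train the softmax head on them.
# Every epoch then reuses these exact vectors; augmenting the raw images
# afterwards cannot change what the head sees.
cached = conv_base(images)

# Option B: keep the conv base inside the training loop. Augmentation now
# changes the features each epoch, at the cost of re-running the base.
fresh = conv_base(augment(images))

# The cached and freshly computed features disagree, which is the point:
# augmentation only takes effect if the conv base runs *after* it.
assert not np.allclose(cached, fresh)
```

So augmentation and feature caching are compatible only if you accept recomputing the conv forward pass every epoch, which removes the speed advantage that caching was meant to provide.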

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago

    Nice explanation.

  • @masakkali9996
    @masakkali9996 5 years ago

    Can you please share the code for color shifting?

    • @1995pipo
      @1995pipo 5 years ago +1

      The colors are usually 8-bit values (0-255); you just need to add the shift values (as in the video) to each channel.
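A minimal sketch of the additive color shift the reply describes (the function name and the default shift amounts are arbitrary examples, not the course's code): add a per-channel offset and clip back into the valid 8-bit range.

```python
import numpy as np

def color_shift(img, shift=(20, -20, 20)):
    """img: uint8 array of shape (H, W, 3); shift: (dR, dG, dB) offsets."""
    # Widen the dtype before adding so values can't wrap around, then clip.
    shifted = img.astype(np.int16) + np.array(shift, dtype=np.int16)
    return np.clip(shifted, 0, 255).astype(np.uint8)

img = np.full((2, 2, 3), 250, dtype=np.uint8)  # a tiny near-white image
out = color_shift(img)
print(out[0, 0])  # [255 230 255]  (R and B clipped at 255)
```

In practice the per-channel offsets are drawn at random per training image, so each epoch sees slightly different color casts of the same photo.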