Hey, if I perform k-fold cross validation on an augmented dataset, and I want balanced classes in both the train and test sets (my dataset is imbalanced by default), is it a smart approach to augment the train and test sets separately for each fold? That way the modified copies of original images don't land in both the train and test set at the same time, and we avoid data leakage.
What are some thoughts on whether to apply augmentation to the test dataset? One opinion is to never augment the test dataset because we want test data to represent production. But isn't part of the reason we augment the training data that we don't have enough data, and we augment it in ways we think reflect different real scenarios? If that is the case, don't we also want to apply augmentation to the test data?
how are RGB values supposed to be negative, or is it just an 8-bit signed representation?
I think by the negative sign he meant to subtract that value from the current values.
I think it reverses the pixel value within its range. Say a pixel value in the R channel is 25 in the range 0 to 255; if we reverse it, the value would be 255 - 25 = 230. The same applies to every pixel in each color channel.
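As a quick check of that arithmetic, a minimal NumPy sketch (assuming an 8-bit RGB image array):

```python
import numpy as np

img = np.array([[[25, 100, 200]]], dtype=np.uint8)  # a single RGB pixel

inverted = 255 - img    # "negative" of the image: 25 -> 230, 100 -> 155, 200 -> 55
print(inverted)         # [[[230 155  55]]]
```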
Thank you~~~~~
I wish I had exact numbers. How much is too much augmentation?
Too much in the sense that it completely alters whatever is supposed to be represented in the image. Say you have a cat image and you shear it by an extreme amount such that the cat is no longer recognizable; that is too much augmentation (shearing, in this case).
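For instance, with Keras' ImageDataGenerator a small shear keeps the subject recognizable while an extreme one can destroy it; the two values below are only illustrative, not recommended settings.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Moderate shear: the cat still looks like a cat.
mild = ImageDataGenerator(shear_range=10)      # shear angle in degrees

# Extreme shear: the subject may become unrecognizable,
# i.e. "too much" augmentation.
extreme = ImageDataGenerator(shear_range=80)
```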
"Data augmentation or how to fake your data" :)
I have a question: when we separate the "conv" model and the "softmax" model, save the last conv layer's output to disk, and then use it as the input of the softmax model as in the last video (transfer learning), can't we use data augmentation? I've seen this in the "Deep Learning with Python" book, but I can't understand why we can't use data augmentation.
Your question is not clear, and I don't remember the context of this video.
If I had to guess: in transfer learning we use the parameters trained in another model to initialize a new one. You can train this new model on the data you have, and of course you can augment it as well.
Let me know what your question was if this doesn't answer it.
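If the question is about the workflow in Deep Learning with Python (run all images through the convolutional base once, save the features to disk, then train a small classifier on them), the reason augmentation doesn't help there is that each image passes through the conv base exactly once, so only one fixed version of it ever gets cached. A hedged sketch of the two workflows, assuming a Keras-style VGG16 conv base:

```python
from tensorflow import keras
from tensorflow.keras import layers

conv_base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                     input_shape=(150, 150, 3))
conv_base.trainable = False

# Option A: precompute features once and save them to disk.
# Fast, but every image is seen in exactly one fixed form,
# so train-time augmentation has no effect.
# features = conv_base.predict(images)

# Option B: keep the conv base inside the model, so each epoch
# pushes freshly augmented images through it.
model = keras.Sequential([
    keras.Input(shape=(150, 150, 3)),
    layers.RandomFlip("horizontal"),   # augmentation applied per batch
    layers.RandomRotation(0.1),
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
```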
Nice explanation.
Can you please share the code for color shifting?
The colors are usually 8-bit values (0-255); you just need to add those shift values (as in the video) to each channel.
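A minimal sketch of that, assuming an 8-bit RGB image as a NumPy array; the shift values are arbitrary examples.

```python
import numpy as np

def color_shift(img, shifts=(20, -20, 5)):
    """Add a per-channel offset to an H x W x 3 uint8 image and clip back to 0-255."""
    shifted = img.astype(np.int16) + np.array(shifts, dtype=np.int16)
    return np.clip(shifted, 0, 255).astype(np.uint8)

# img = ...  # H x W x 3, dtype=uint8
# augmented = color_shift(img, shifts=(30, 0, -15))
```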