Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/mFAIBMbACMA
You should have 1 million subscribers!
Hello,
I'm facing a problem and I hope you can help me with it.
I'm trying to add values after flattening, but these values must be based on each image's pixels.
So for each image in the training set, I'm trying to add some values after the flatten to feed to the fully connected network.
Thanks in advance!
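One way to sketch this, assuming the extra values can be computed per image from its pixels (the mean and max statistics below are hypothetical placeholders; substitute whatever pixel-based features you actually need):

```python
import torch

batch = torch.rand(4, 3, 28, 28)           # (batch, channels, height, width)

flat = batch.flatten(start_dim=1)          # shape: (4, 3*28*28) = (4, 2352)

# Hypothetical per-image features computed from each image's pixels.
extra = torch.stack(
    (flat.mean(dim=1), flat.max(dim=1).values),
    dim=1
)                                          # shape: (4, 2)

# Concatenate along the feature axis before the fully connected layer.
fc_input = torch.cat((flat, extra), dim=1) # shape: (4, 2354)
print(fc_input.shape)  # torch.Size([4, 2354])
```

The first linear layer's `in_features` would then need to be `3*28*28 + 2` to match.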
Watching the whole ad as well as clicking the ad to support you guys❤️
t = t.reshape(t.shape[0],-1), works regardless of the number of color channels or even output channels for that matter. It's great fun - watching your videos. They are both stimulating, cognizant and informative - a rare combination to find in tutoring. Cheers mate! Keep up the good work.
Hey Sourajit - Thank you! I really appreciate that, and I'm glad you have enjoyed the playlist!!
One of the best channels! These are videos with high-quality content; the production, animation, and verbal and visual explanations are well put together! Thanks for doing this!
Really cool animations along with their parallel explanations, nice job
Your amazing work is helping me tremendously, thank you so so much!
I didn't expect this to be interactive! Awesome!
Why don't you have more subs? This is gold.
Excellent work guys!
t.reshape(t.size()[0], t.size()[1] * t.size()[2] * t.size()[3])
Thanks for this awesome series of tutorials and blogposts. I benefited a lot from this. Keep up the good work :)
This will probably seem stupid, but I'm posting it since it surprisingly worked.
t.reshape(int(torch.tensor(t.shape)[0]),int(torch.tensor(t.shape)[1])*int(torch.tensor(t.shape)[2])*int(torch.tensor(t.shape)[3]))
Thanks for the great videos !
Thank you for these Videos. They really help a lot for beginners !
Hey Prathmesh - You are welcome!
Thank you for another high quality video :)
Here is one simple solution:
t.view(t.shape[0], -1)
get the first (or any) dimension with .shape, and use -1 to flatten the rest
Thanks Ulm - You're welcome! Nice solution. I like how this solution generalizes to batches of any size. 🤖
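A quick sketch showing why this generalizes (batch sizes chosen arbitrarily): the first axis is kept, and -1 absorbs whatever is left.

```python
import torch

# The same expression works for any batch size, since -1 tells
# view() to infer the flattened feature dimension automatically.
for batch_size in (1, 3, 10):
    t = torch.rand(batch_size, 1, 4, 4)
    flat = t.view(t.shape[0], -1)
    print(flat.shape)  # (batch_size, 16) in each case
```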
I understood that we combine all the layers from the image matrix to perform the flattening operation, but I didn't understand why we do this. The end result is that we pass the values to the softmax classifier, which performs multi-class classification. Thanks in advance!
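A minimal sketch of the "why": a fully connected (`nn.Linear`) layer expects a rank-2 `(batch_size, features)` tensor, so the channel, height, and width axes of each image must be merged into a single feature axis before the classifier can see them.

```python
import torch
import torch.nn as nn

t = torch.rand(3, 1, 4, 4)     # batch of 3 single-channel 4x4 images
fc = nn.Linear(in_features=16, out_features=10)  # 10 class scores

flat = t.flatten(start_dim=1)  # (3, 16) -- one feature row per image
logits = fc(flat)              # (3, 10) -- one score per class, per image
probs = logits.softmax(dim=1)  # each row sums to 1 for classification
print(probs.shape)             # torch.Size([3, 10])
```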
very good video!
t.reshape(t.shape[0],-1)
I like the Ted talk in the end
Wonderful content, please keep up the good work! I happen to have a PyTorch project this summer, so this will be immensely helpful!
Thank you Ziqiang! Really appreciate all your comments!
It's an awesome series..
Hey Md Hasan - Thank you!
I am a couple of years late, but...
t.reshape(3, -1) does the job for the example in the video,
or
t.reshape(t.shape[0], -1) works for any batch size.
wonderful content!!!!
Great video! It deserves more views. 73 likes and no dislikes speaks for itself.
Thank you Akash!
For RGB images with 3 color channels, shouldn't you use 'start_dim=2' instead of 'start_dim=1' so that each color channel of the image is flattened individually? Or do you just flatten all of them at once? Thanks!
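A quick sketch of the difference between the two choices: `start_dim=1` merges all channels into one feature axis per image (the usual choice before a fully connected layer), while `start_dim=2` keeps the channels separate and flattens each channel's pixels individually.

```python
import torch

t = torch.rand(2, 3, 4, 4)   # (batch, channels, height, width)

# Merge channels, height, and width per image:
flat_all = t.flatten(start_dim=1)
print(flat_all.shape)          # torch.Size([2, 48])

# Keep channels separate; flatten each channel's pixels:
flat_per_channel = t.flatten(start_dim=2)
print(flat_per_channel.shape)  # torch.Size([2, 3, 16])
```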
Mindblow
There's a frog on a log in a hole at the bottom of the sea. The frog is the width of the image, the log is the height, the hole is the channel, and the depth is the index of the image in the dataset. Easy.
I had a lot of fun reading this. 🤣🤣🤣 It's going over my head though, so if it's not completely random, I'd love to hear more.
If we have 3 color channels (RGB), when we perform the flatten operation, will all 3 channels combine into one tensor? Thanks in advance!
Yes. Here is some experimental code:
r = torch.ones(1,2,2)
g = torch.ones(1,2,2) + 1
b = torch.ones(1,2,2) + 2
img = torch.cat(
(r,g,b)
,dim=0
)
img.shape
#output: torch.Size([3, 2, 2])
img
#output:
tensor([[[1., 1.],
[1., 1.]],
[[2., 2.],
[2., 2.]],
[[3., 3.],
[3., 3.]]])
This code gives us an rgb image that has height and width of 2.
Now, flatten like this:
img.flatten(start_dim=0)
#output: tensor([1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
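Extending that example to a batch (a sketch): stack several such images and flatten with start_dim=1, so the batch axis is preserved and each row is one fully flattened RGB image.

```python
import torch

# Rebuild the rgb image from the example above.
r = torch.ones(1, 2, 2)
g = torch.ones(1, 2, 2) + 1
b = torch.ones(1, 2, 2) + 2
img = torch.cat((r, g, b), dim=0)       # one rgb image, shape (3, 2, 2)

batch = torch.stack((img, img), dim=0)  # batch of 2 images, (2, 3, 2, 2)

# start_dim=1 flattens each image's channels together but keeps
# the batch axis intact:
flat = batch.flatten(start_dim=1)
print(flat.shape)  # torch.Size([2, 12])
print(flat[0])     # tensor([1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
```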
t.reshape(3, -1).squeeze(dim=1)
t.reshape(3,16).squeeze(dim=1)
t.reshape(-1,16) or
t.reshape(-1,t.flatten(start_dim=1).shape[1])
Hey Morten - You've got it. They work! Thanks for contributing these! 🚀
sound effects are from bisqwit c++ simd videos?
Many creators use the same sound effects libraries 😄
@@deeplizard oh I didn't know that. Great content BTW. Love it, I'm binge-watching.
does modern camera system's eye tracking system use CNN?
Hey Ziqiang - Not sure which ones do and which ones don't. We are likely moving in that direction though. Here are a couple of resources that look interesting:
blogs.nvidia.com/blog/2016/08/30/eye-tracking-deep-learning/
arxiv.org/abs/1806.10890
stanford.edu/class/ee267/Spring2018/report_griffin_ramirez.pdf
Really good video! Why only 467 views?
Thanks YUAN! 467 is pretty weak. I know. Hopefully it's only temporary.
Thank you for your comment. I really appreciate that! :)
deeplizard I am waiting for more pytorch videos🤣
You are the god of AI
img.shape
#output: [a,b,c,d]
img = img.reshape(img.shape[0], -1)
"PyTorch’s built-in flatten() method." Hey there, r u sure there is FLATTEN() in Pytorch? Could not find in the official Pytorch documentation.
Hey Pakhomov - You are right! The documentation is missing for the flatten function.
The flatten function is new. It was added in PyTorch 0.4.1.
This is the commit where the function was added: github.com/pytorch/pytorch/pull/8578
You can also see it here (search for flatten): github.com/pytorch/pytorch/releases
There was an issue opened for the missing documentation. Check it out here: github.com/pytorch/pytorch/issues/9876
The issue was already fixed and merged into the master code branch. You can see it here: pytorch.org/docs/master/
To get to that link from the official docs page, use the version selector at the top, and choose master.
Good eye on that one. Great attention to detail! Hope this info helps!
t.reshape(3, -1) does the trick
Hey Junior - Excellent! I like. 🤖
@Junior De Jesus Nice, now I also understand what exactly the -1 was doing in the def flatten(t) function in the last video. :D
The -1 parameter means that torch or numpy will figure out the unknown dimension by looking at the length of the array and the remaining dimensions.
It's not a general answer, though; I think it should be t.reshape(t.shape[0], -1).
@@baiyuzhao4281 Was it specified that the answer should be general?
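To illustrate the -1 inference described above (a quick sketch): reshape fills in the one unknown dimension from the total element count divided by the known dimensions.

```python
import torch

t = torch.rand(3, 4, 4)   # 48 elements total

# -1 is inferred as 48 / 3 = 16:
a = t.reshape(3, -1)
print(a.shape)   # torch.Size([3, 16])

# It works in either position, inferred as 48 / 16 = 3 here:
b = t.reshape(-1, 16)
print(b.shape)   # torch.Size([3, 16])

# Only one dimension may be -1; two unknowns can't be inferred.
```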
t = torch.stack((t1,t2,t3)).reshape(3,1,16).squeeze()
t.flatten().reshape(3, 16).squeeze() ( sorry for being late to the party :P)
I've been watching these on headphones, until this video...
Suddenly, an earthquake... Evidently my headphones have no bass response :p But my speakers can shake the house.
t2.reshape(t2.shape[0], np.prod(t2.shape[1:]))
t.reshape(t.flatten(start_dim=1).shape)
;)
t.reshape(3,-1)
t.reshape(3,16)
t = t.reshape(dim, t.shape[0], 1, -1)
t = t.squeeze()
print(t.reshape(3, -1))
t.reshape(-1, 16)
What is this "batch"? It's been confusing me for a long time.
Check out the below video and blog from our Deep Learning Fundamentals course. It is all about batches in deep learning. Let me know if it helps clarify!
deeplizard.com/learn/video/U4WB9p6ODjM
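As a quick standalone illustration (a sketch with fake data, not from the linked video): a batch is simply the group of samples processed together in one forward/backward pass, and a DataLoader hands them to you one batch at a time.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 100 fake single-channel 4x4 "images" with random labels.
data = torch.rand(100, 1, 4, 4)
labels = torch.randint(0, 10, (100,))

loader = DataLoader(TensorDataset(data, labels), batch_size=10)

# Each iteration yields one batch of 10 samples.
images, targets = next(iter(loader))
print(images.shape)   # torch.Size([10, 1, 4, 4])
print(targets.shape)  # torch.Size([10])
```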
b.reshape(3,-1)
t.reshape(t.shape[0],int(t.numel()/t.shape[0])) this will work as well
Damn, every time I see a video of yours, it makes me feel bad about why I haven't subscribed to your Patreon account yet. Such is the quality and effort in every single frame of every video. It feels like stealing.
t.reshape(1,3,16)
T.reshape(1,-1)[1]
t.reshape(3,1,1,-1).squeeze()
Although it's late, I'm going to post my answer.
t.reshape( t.shape[0], -1 )
t.reshape(3,1,1,-1)
t = t.reshape(len(t), -1)
t.reshape(1,-1)[1]
reshaped_new_tensor = tensor_reshaped.reshape((1,-1))
t.reshape(1,-1)[0]
t.reshape(len(t[0]),-1)
t.reshape(1,-1)[0,:]
Can you please not include that loud bass sound when you type code? It really hurts my ears when I watch you with headphones :(
Hey Raphael - Sorry about that. It's pretty much in for this whole series. I'll consider removing it in future videos. Which brand of headphones are you using? In my headphones it sounds very low and soothing.
I've noticed the high bass in both my headphones and in my car. Your headphones may be lowering the bass.
t=t.reshape(t.shape[0],(torch.numel(t)//t.shape[0])).squeeze()
t.reshape(3, -1)
t.reshape(3,16)
t.reshape(t.shape[0],-1)
t.reshape(3,-1)
t.reshape(3,-1)
t.reshape(3,-1)
t.reshape(3,-1)
t.reshape(3,-1)
t.reshape(3,-1)
t.reshape(3,-1)
t.reshape(3,-1)
t.reshape(3,-1)
t.reshape(2,-1)
t.reshape(t.shape[0], -1)
t.reshape(t.shape[0],-1)
t.reshape(t.shape[0], -1)
t.reshape(t.shape[0], -1)