Check out the corresponding blog and other resources for this video at:
deeplizard.com/learn/video/AglLTlms7HU
I don't know why but your channel is underrated.
You have got the best tutorials for pytorch and your videos helped me to debug codes
One distinctive factor your videos have compared with others is that you include some short talks at the end. It makes me more motivated to watch your videos!
Yeah, that's great. This and a bunch of other things make the videos very enjoyable. I really like learning this way; imagine if more content were as great as this, our collective intelligence would explode because learning would be so much nicer.
I was feeling the same!!!
This could be the best tutorial I've ever seen on YouTube. You managed to keep it simple and clear, but deep enough. There is no doubt that the teacher is an expert in PyTorch.
loved the super deep bass while creator was typing
Awesome! Were you wearing headphones?
I'm not sure if that's the right question to be asking a bot. 🤖
deeplizard earphones actually. Eager to reach home and try that bass on my headphones.
LOL! I'm dying. 🤣🤣🤣
I'm loving this series! Thank you so much for putting it together.
A particular comment on this one: PyTorch is mostly an abstraction layer for vectorization (SIMD on CPU and CUDA on GPU), precisely to improve performance. Saying that you should not worry about performance is like saying you should not use PyTorch and go back to Python for loops. I agree that copying is less bug-prone, but the whole point of using PyTorch is to get performance out of modern devices.
Soundtrack is on point
This is something that YouTube tutors don't typically dive deep into. You surpass my expectations in each video when you dive into the specifics.
On the flip side, I appreciate your comments that point to specifics. Very much appreciated!
I'm a simple man; I understand it all. It's perfect.
It's great to put video clips into a tutorial video, because it helps bring a person's attention back if they're about to fall asleep. It also helps with memorization: if you play Soumith's video while explaining tensors, the viewer may create an association between tensors and Soumith's words, which is memorized better. Keep making such wonderful tutorials, thank you! So much effort goes into creating each of these tutorials; I appreciate it.
I like how you made your videos fun and less boring
Love it. Thanks. I appreciate all the efforts you two are putting into producing these great videos.
I can't stop watching these awesome explanations.
best tutorial series in AI ever
It's enjoyable to watch this video
Ah, I'm so worried right now: I'm binge-watching your playlist, and my rate of watching is way, way faster than your rate of uploading new content! 🤣🤣 Really appreciate what you're doing. Fab stuff!
Hey Ahmad - Excellent! That feeling when we become conscious of the fact that the end is coming sooner than we want! I know that feeling! 🤖 haha! Thank you! Really appreciate that!
🙏🏼🙏🏼
OMG! I'm so enjoying this!
Thanks Ephrem! Glad to hear that!
This is really useful. Finer points!!
Indeed, definitely the finer points on this one!
I like to look into details. This video is great!
This is one of the best tutorials I have ever seen. But the sad fact is that it is no longer free. I completed lecture 19, then realized that the rest of the videos were removed from the YouTube playlist. Heartbroken 💔
very good!
{
"question": "What is the go-to tensor creation function for everyday use?",
"choices": [
"torch.tensor()",
"torch.Tensor()",
"torch.as_tensor()",
"torch.from_numpy()"
],
"answer": "torch.tensor()",
"creator": "Hivemind",
"creationDate": "2019-08-26T22:09:36.603Z"
}
Hey Pranav - Thank you for making a question! I just added it to the site.
You may need to clear your cache if it doesn't update.
Check here: deeplizard.com/learn/video/AglLTlms7HU
I have a couple of doubts:
1. In the blog you mentioned that as_tensor will give a performance improvement when there are multiple back-and-forth operations between numpy.ndarray objects and tensor objects. Can you please give an example of where we might encounter such cases?
2. as_tensor is preferred because it can handle any Python array-like object. Can you please give examples of "array-like objects"?
Hey Ayush - The key to your first doubt is to realize when it's good to share memory. There are no typical scenarios that come to mind.
For your second doubt, have a look at this link:
docs.scipy.org/doc/numpy/user/basics.creation.html#converting-python-array-like-objects-to-numpy-arrays
I added this to the blog as well.
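As a concrete illustration of "array-like" (a minimal sketch, not from the video or the blog), torch.as_tensor() accepts Python lists, tuples, nested sequences, and numpy arrays alike:

```python
import numpy as np
import torch

# torch.as_tensor() accepts a variety of "array-like" objects.
t_list = torch.as_tensor([1, 2, 3])               # Python list
t_tuple = torch.as_tensor((1, 2, 3))              # tuple
t_nested = torch.as_tensor([[1, 2], [3, 4]])      # nested list -> rank-2 tensor
t_ndarray = torch.as_tensor(np.array([1, 2, 3]))  # numpy ndarray

print(t_nested.shape)  # torch.Size([2, 2])
```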
{
"question": "Which is a constructor? Assume \"data = np.array([1,2,3])\".",
"choices": [
"torch.Tensor(data)",
"torch.tensor(data)",
"torch.as_tensor(data)",
"torch.from_numpy(data)"
],
"answer": "torch.Tensor(data)",
"creator": "Seabass",
"creationDate": "2020-03-09T02:44:47.958Z"
}
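The distinction the question is after, in a quick sketch: torch.Tensor is the class constructor and converts to the default dtype (float32), while torch.tensor is a factory that infers the dtype from the data:

```python
import numpy as np
import torch

data = np.array([1, 2, 3])

t_ctor = torch.Tensor(data)     # constructor: converts to the default dtype
t_factory = torch.tensor(data)  # factory: infers an integer dtype from data

print(t_ctor.dtype)    # torch.float32
print(t_factory.dtype) # an integer dtype matching the data
```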
Hey Sebastian - Your question is now live on the site! You may need to clear your cache to see it. Thank you for taking the time to make it. You are awesome! Here: deeplizard.com/learn/video/AglLTlms7HU
Awesome
What is that outro song called?
In the blog post, I think there is a typo, if I understand correctly:
"This sharing just means that the actual data in memory exists in a single place. As a result, any changes that occur in the underlying data will be reflected in both objects, the torch.Tensor and the numpy.ndarray."
The objects should be torch.as_tensor() and torch.from_numpy(). Please correct my understanding if I am wrong. Thanks.
The objects resulting from a call to as_tensor or from_numpy are of type torch.Tensor, so there is no error there. I'd suggest watching coding tutorials on the side.
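A quick sketch of that point: both calls return plain torch.Tensor objects, and both share memory with the source ndarray, which is what the quoted blog sentence describes:

```python
import numpy as np
import torch

data = np.array([1, 2, 3])
t1 = torch.as_tensor(data)
t2 = torch.from_numpy(data)

# Both results are ordinary torch.Tensor objects.
print(type(t1), type(t2))

# Both share memory with data, so one change shows up everywhere.
data[0] = 99
print(t1[0].item(), t2[0].item())  # 99 99
```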
Came here to comment on the sound when you're typing in these basic functions.... it really is funny! :D
🤖
module 'torch' has no attribute 'as_tensor'. I assume the as_tensor method doesn't exist in PyTorch 0.4.0.
Hey Jean - The as_tensor call was indeed introduced in 0.4.1.
That 'hmmmmmmmm' at 5:20.
The functions (.tensor, .as_tensor, .from_numpy) that return data types in your video don't return data types for me when the data is int32. Is this something that has changed since your video?
Same for me, and I remember doing it about a year ago and it was working, so it probably has something to do with either the PyTorch version or the Python version.
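One way to check, regardless of how the tensor prints (a small sketch, assuming a reasonably recent PyTorch): the `.dtype` attribute always reports the data type, even when the printed repr omits it:

```python
import numpy as np
import torch

data = np.array([1, 2, 3], dtype=np.int32)
t = torch.as_tensor(data)

# The printed repr only annotates non-default dtypes,
# but .dtype always reports the type explicitly.
print(t.dtype)  # torch.int32
```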
{
"question": "What is the major difference between a tensor created with torch.tensor(data) and a tensor created with torch.as_tensor(data), given that both methods use the same data?",
"choices": [
"torch.tensor() makes a copy of data, whereas torch.as_tensor() shares the same memory space with data",
"torch.tensor() is a factory method, whereas torch.as_tensor() is not",
"torch.tensor() defaults to float64, whereas torch.as_tensor() defaults to integer",
"no difference, they are the same."
],
"answer": "torch.tensor() makes a copy of data, whereas torch.as_tensor() shares the same memory space with data",
"creator": "jlam",
"creationDate": "2019-10-23T02:05:42.493Z"
}
Nice !
Hey jlam - Your question is live. Thank you for making it! 🙏
I was trying to use VS Code to debug a PyTorch DNN implementation. However, the 'torch' library is not getting detected in VS Code. It throws many lint errors like Error 1: "E1101: Module 'torch' has no 'randn' member", Error 2: "E1101: Module 'torch' has no 'mean' member", and so on. Could you please help me understand and fix this issue? While searching online, I found only this link in the PyTorch forum: discuss.pytorch.org/t/how-to-use-visual-studio-code-for-pytorch/13643. This is exactly the issue I am facing, but I did not understand what exactly the solution in the forum was. I will look forward to your inputs. Thanks!
Hey Rohit - I'm not certain about this issue.
To implement the solution on the forum do this:
1. Open VS code.
2. Go to: File > Preferences > Settings
3. Type this into the "search settings" box: python.linting.pylintargs
4. Click: "Edit in settings.json"
5. In the panel on the left, click the "edit" icon.
6. Choose: "Copy to settings"
7. In the panel on the right, add this to the empty list: "--extension-pkg-whitelist=torch"
The short story is to add that value to a list in the settings.json file.
Here is a reference for updating the IDE settings: code.visualstudio.com/docs/getstarted/settings
Hope this helps. Let me know if you get it working.
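For reference, the resulting settings.json entry would look roughly like this (the key name `python.linting.pylintArgs` is from older versions of the VS Code Python extension; treat the exact spelling as an assumption to verify against your VS Code version):

```json
{
    "python.linting.pylintArgs": [
        "--extension-pkg-whitelist=torch"
    ]
}
```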
Thank you for the detailed instructions. I tried this option, but it didn't work.
Adding this in the settings.json will disable linting:
"python.linting.pylintEnabled": false
This one is suggesting an alternative as well:
github.com/pytorch/pytorch/issues/1942
Here is the VS code docs on pylint:
code.visualstudio.com/docs/python/linting#_pylint
Update! as_tensor() works with python lists!
8:21
torch.as_tensor() also accepts a list, but torch.from_numpy() does not.
But in the video at 9:04, and also at deeplizard.com/learn/video/AglLTlms7HU, you state that:
"The memory sharing of as_tensor() doesn’t work with built-in Python data structures like lists." - I think you mean that from_numpy() doesn't work with it, right?
Hey nofreewill - Thanks for following up with your concern. In this case, the information is correct. For reference, have a look at the PyTorch documentation here: pytorch.org/docs/stable/torch.html#torch.as_tensor
Currently (v1.3.1), it states as follows:
"Convert the data into a torch.Tensor. If the data is already a Tensor with the same dtype and device, no copy will be performed, otherwise a new Tensor will be returned with computational graph retained if data Tensor has requires_grad=True. Similarly, if the data is an ndarray of the corresponding dtype and the device is the cpu, no copy will be performed."
The method does indeed accept Python lists. However, it will create a copy of the data inside the list. That's what is meant by the "memory sharing" part. Hope this helps. Like your handle btw.
@deeplizard Oh, so the difference between 8:21 and 9:04 is that the latter concerns memory sharing but the former does not, right?
@deeplizard What do you mean by "Like your handle btw."?
8:21 is showing which methods share data/memory. These two:
1) torch.as_tensor()
2) torch.from_numpy()
9:04 discusses some caveats regarding the capability of data/memory being shared.
The key take-away is that the methods share data when they are capable.
Let me know if this makes sense.
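A small sketch of that caveat (assuming a reasonably recent PyTorch): with a Python list, as_tensor() must copy, so no sharing occurs; with a numpy array, it shares memory:

```python
import numpy as np
import torch

# A Python list is not a contiguous buffer a tensor can reuse,
# so as_tensor() has to copy its contents.
lst = [1, 2, 3]
t_from_list = torch.as_tensor(lst)
lst[0] = 99
print(t_from_list[0].item())  # still 1: no sharing with lists

# A numpy array can be reused directly, so as_tensor() shares its memory.
arr = np.array([1, 2, 3])
t_from_arr = torch.as_tensor(arr)
arr[0] = 99
print(t_from_arr[0].item())   # 99: the change is reflected
```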
Your thought provoking username (handle is another term for username).
data = np.array([1,2,3])
data[0] = 0
data[1] = 0
data[2] = 0
t1 = torch.Tensor(data)
t2 = torch.tensor(data)
t3 = torch.as_tensor(data)
t4 = torch.from_numpy(data)
print(t1)
print(t2)
print(t3)
print(t4)
#tensor([0., 0., 0.])
#tensor([0, 0, 0])
#tensor([0, 0, 0])
#tensor([0, 0, 0])
I think PyTorch updated this behavior: as of 2021 (I'm not sure when it started), all 4 methods basically change along with the numpy `data`, so the video is (partially) out of date.
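For what it's worth, the snippet above zeroes `data` before any of the tensors are created, so all four print zeros regardless of whether they copy or share. A sketch that mutates `data` after creation (assuming a reasonably recent PyTorch) still shows the copy-vs-share split:

```python
import numpy as np
import torch

data = np.array([1, 2, 3])

# Create the tensors first...
t_copy = torch.tensor(data)       # copies the data
t_share = torch.as_tensor(data)   # shares memory with data
t_numpy = torch.from_numpy(data)  # shares memory with data

# ...then mutate the source array.
data[0] = 0

print(t_copy[0].item())   # 1: the copy is unaffected
print(t_share[0].item())  # 0: reflects the change
print(t_numpy[0].item())  # 0: reflects the change
```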
import torch
import numpy as np
print(torch.__version__)
0.3.1.post2
data = np.array([1,2,3])
t1 = torch.as_tensor(data)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in ()
----> 1 t1 = torch.as_tensor(data)
AttributeError: module 'torch' has no attribute 'as_tensor'
torch.tensor(np.array([1,2,3]))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in ()
----> 1 torch.tensor(np.array([1,2,3]))
TypeError: 'module' object is not callable
Any thoughts? Should I move to an older version?
Hey Raghavendra - I think you should go in the newer direction.
It looks like you are running: 0.3.1
The latest release is: 0.4.1
Actually, I have a venv for fastai which I set up two months back. Looks like the version is old. Any thoughts on how to update to the new versions of torch and fastai in the same venv? Even though I'm on a GTX 1080 8GB, I will mostly be using GCP for the new fastai course starting today. Thanks, Raghu
Awesome! Enjoy the fast.ai course!
It looks like the as_tensor call was indeed introduced in 0.4.1.
Link: github.com/pytorch/pytorch/pull/7109
Also, it appears that the fast.ai library doesn't support 0.4.
In the environment.yml file, you can see this dependency: - pytorch