Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (AI Paper Explained)
- Published Jun 5, 2024
- #transformer #nystromer #nystromformer
The Nyströmformer (or Nystromformer, Nyströmer, Nystromer) is a new drop-in replacement for approximating the self-attention matrix in Transformers with linear memory and time requirements. Most importantly, it uses the Nyström method to sub-select (or segment-mean) queries and keys as so-called landmarks, and uses those to reconstruct the inherently low-rank attention matrix. This is relevant for many areas of Machine Learning, especially Natural Language Processing, where it enables longer sequences of text to be processed at once.
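The landmark idea described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: it uses segment-mean landmarks and an exact pseudoinverse, whereas the paper approximates the pseudoinverse iteratively; all names here are made up.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m=4):
    """Approximate softmax(Q K^T / sqrt(d)) V with m segment-mean landmarks.

    Never materializes the full n x n attention matrix: only
    n x m, m x m, and m x n pieces are formed.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    # Segment-mean landmarks: average queries/keys over n/m contiguous segments.
    Q_l = Q.reshape(m, n // m, d).mean(axis=1)
    K_l = K.reshape(m, n // m, d).mean(axis=1)
    F = softmax(Q @ K_l.T * scale)           # n x m: queries vs. key landmarks
    A = softmax(Q_l @ K_l.T * scale)         # m x m: landmark-landmark kernel
    B = softmax(Q_l @ K.T * scale)           # m x n: query landmarks vs. keys
    return F @ np.linalg.pinv(A) @ (B @ V)   # n x d
```

A nice sanity check: with m = n (one landmark per token) the landmarks are the tokens themselves, F = A = B, and since A pinv(A) A = A exactly, the approximation reproduces standard attention.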
OUTLINE:
0:00 - Intro & Overview
2:30 - The Quadratic Memory Bottleneck in Self-Attention
7:20 - The Softmax Operation in Attention
11:15 - Nyström-Approximation
14:00 - Getting Around the Softmax Problem
18:05 - Intuition for Landmark Method
28:05 - Full Algorithm
30:20 - Theoretical Guarantees
35:55 - Avoiding the Large Attention Matrix
36:55 - Subsampling Keys vs Negative Sampling
43:15 - Experimental Results
47:00 - Conclusion & Comments
Paper: arxiv.org/abs/2102.03902
Code: github.com/mlpen/Nystromformer
Appendix: github.com/mlpen/Nystromforme...
LRA Results: / 1359301186734620675
Twitter lucidrains w/ author: / 1359597104075661312
Twitter lucidrains w/ _clashluke: / 1359483460851802115
Abstract:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences -- a topic being actively studied in the community. To address this limitation, we propose Nyströmformer -- a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than standard Transformer. Our code is at this https URL.
Authors: Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh
Links:
TabNine Code Completion (Referral): bit.ly/tabnine-yannick
UA-cam: / yannickilcher
Twitter: / ykilcher
Discord: / discord
BitChute: www.bitchute.com/channel/yann...
Minds: www.minds.com/ykilcher
Parler: parler.com/profile/YannicKilcher
LinkedIn: / yannic-kilcher-488534136
BiliBili: space.bilibili.com/1824646584
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
If nothing else, the contribution to model naming is a clear increment to SOTA.
Nyströmer clearly is.
@@jonatan01i I will agree with that.
Hello Yannic,
Your comment about the order of operations is correct. It is one of those things where you set out to check how poorly it performs and find out that it can work empirically (at least in limited settings). The lemma is not practically useful; it merely shows that if/when everything is idealized, the procedure does not lead to nonsensical conclusions. The choice of F early in the paper was to avoid a conflict with D (D and d were both used) and E (the ones matrix).
What about the A^+ comment, was that actually a typo in the paper? ua-cam.com/video/m-zrcmRd7E4/v-deo.html
@@tchlux Yes, that is a typo. We somehow left out the pseudo inverse sign.
@@tchlux This is a typo. We will update it. Thanks for the catch.
I have seen this exact type of lemma in many discussions about approximations. It did not seem out of place to me. It's nice to know that in the limit your approximation will agree with the ground truth which is certainly not the case in all approximation methods.
Thanks for making this great video. Nice catch for the typo. We will update the draft soon.
0:42 Nyan-storm-former!
.
3:30 Time for my weekly Transformer explanation :)
.
27:00 That was a really sweet and easy to understand explanation.
.
35:00 I wonder if we can have a DNN just predict a landmark tensor..
Incredible breakdown, subbed!
Very nice explanation, sir. Thank you!!
Lol, unexpectedly mentioned 😅 thanks for the video!
Sweet Thursday with more sweety kind of paper. Buon appetite, Yannic! :)
the pronunciation of those author names was clutch :D
What I hear: nice transformer
If I lift box 1 onto shelf 1 and box 1 onto shelf 2 and box 2 onto shelf 1, then I can predict the effort in lifting box 2 onto shelf 2. Great analogy, thank you.
Love this video!
that's a nice trömformer
We like this transformer!
top work pronouncing the authors
How did you import the PDF into OneNote with such good quality? The printout option generally inserts very poor-quality images of the pages.
It's definitely poor for me, too; it's right on the edge of being useful.
Nobody:
Yannic: uöu 1:07
By the power of Yannic, I rename you!
It is F because it is forward attention, right? (Then it would fit with B being backward.) It is not entirely right (A contains part of the forward attention), but I think that is the intention.
One thing that I still couldn't understand is how backprop works in a transformer. Does someone have a good reference or video that explains it?
seems like you have fundamental gaps in ML.
It works like in any other neural network, by applying the chain rule to all involved operations
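To make that reply concrete, here is a minimal, hypothetical gradient check (not from the paper): backpropagating through a softmax readout is just the chain rule applied to each operation, and the analytic gradient can be verified against finite differences.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(W, x, v):
    # Toy scalar loss: a softmax of scores x @ W, read out against v.
    return softmax(x @ W) @ v

def grad_W(W, x, v):
    # Chain rule by hand:
    #   dL/dz_j = p_j * (v_j - p.v)   (softmax Jacobian applied to v)
    #   dL/dW_ij = x_i * dL/dz_j
    p = softmax(x @ W)
    dz = p * (v - p @ v)
    return np.outer(x, dz)
```

Autograd frameworks do exactly this bookkeeping automatically for every operation in a transformer; attention adds no special case.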
I bet you wish you could:
from previous_videos import SelfAttention
every time you make a video related to transformers
Does it even matter that the softmax doesn't commute, if the softmax is just a heuristic / hack in the first place?
Or is there something inherently special about softmax in the transformer architecture?
I don't know if I'd call it "special", but I like to think of it geometrically. When you use a softmax, you make it so that the layer immediately after the softmax only has to model a "surface" that lives on the inner wedge of the unit cube (points with 1-norm equal to 1).
@@tchlux that is a good perspective
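A tiny numerical illustration of that geometric point (standard softmax; the values are arbitrary):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability.
    e = np.exp(z - z.max())
    return e / e.sum()

# However extreme the logits, the output lands on the probability
# simplex: every entry positive, 1-norm exactly 1.
p = softmax(np.array([10.0, -3.0, 0.5, 2.0]))
```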
I was not expecting this until a month later.
But where do the keys, queries, and values come from?
They are learned.
So is that like a softmax over time, where it's kind of valid because over many iterations it's pulling random samples? Well, I hope a better way is found.
I struggle to believe that it is actually named the Nyströmformer.
I'll call it the Nyströmer, as suggested, and as it should be.
The name was designed to sound like 'the nice transformer'. So leave the name as is.
I have no idea what most of this means, but the lemma was funny
Didn't this come out like yesterday??
And?
@@ingusmant Just amazed by the speed at which Yannic can read, understand, and produce these videos :o
You're right, it's already old now... ;)
Just FYI, I tried to implement this the day before yesterday but got NaN. I checked the code and realized that formula (14) isn't accurate, and also Z0 = AS / (||AS||_1 ||AS||_inf) should be Z0 = AS^T / (||AS||_1 ||AS||_inf).
Is the NaN from your own implementation or from ours? The accuracy of the pseudoinverse approximation via formula (14) depends on the number of iterations. Z0 is AS^T / (||AS||_1 ||AS||_inf). We will fix the typo in our update.
@@xiongyunyang9643 Thanks for the reply. After I used the corrected (14) and the corrected Z0, the NaN is gone. Just FYI, formula (16) is also inaccurate, but that one is easy to notice.
@@kicckicc Cool. Formula (16), similar to local average pooling, computes the landmarks efficiently.
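For reference, the iterative pseudoinverse discussed in the thread above can be sketched as follows. This is a toy NumPy version using the corrected initialization Z0 = A^T / (||A||_1 ||A||_inf); `iterative_pinv` is an illustrative name, not the repository's API.

```python
import numpy as np

def iterative_pinv(A, iters=6):
    """Approximate the Moore-Penrose pseudoinverse of a square matrix A
    with a cubic Newton-Schulz-style iteration, avoiding an explicit SVD.

    Initialization Z0 = A^T / (||A||_1 * ||A||_inf) guarantees that the
    spectrum of A @ Z0 lies in (0, 1], so the iteration converges.
    """
    I = np.eye(A.shape[0])
    Z = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        AZ = A @ Z
        Z = 0.25 * Z @ (13 * I - AZ @ (15 * I - AZ @ (7 * I - AZ)))
    return Z
```

In the attention setting, A is only the small m x m landmark kernel, so a handful of iterations of cheap m x m matrix products replaces an exact pseudoinverse.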
Hi there!
hi!
To be honest, I expected the Performer to be the ImageNet moment for transformers, but it seems there is still a long way to go, and random Fourier features are not the best way to do the thing. Somewhat sad, because the Performer's idea looked so cool and well-grounded :(
Big leaps come through simple ideas like ReLU, convolution, dropout, residual connections, self-attention... The moment an idea becomes too convoluted, it is less likely to be game-changing.
What are you waiting for? If anything, the transformer revolution seems like it's come with even more force and speed than ImageNet.
*The NICEtrömer*
Indeed
OK, I vote down on this work; I think "Asymmetric Non-local Neural Networks for Semantic Segmentation" is a better one.
Greetings to my old ETH pal Yannic, and say hi to Jonas from me :-)
Have you ever heard of a Michelle Srivastav? More probably you would hear of a Peter Chakraborty. If you can tell me why, you know a lot about caste- and region-based targeting in India.
So, nobody hates India when they are born, but as you grow up you see these divisions between people, majoritarianism, government repression, targeting of the intellectual class, poverty, corruption, and then you start seeing trends in these concepts, all in the name of highly preached American democracy and capitalism... But surely everything is a joke, even misery... Right, guys?
In the last Twitter chart, it's quite surprising that the Performer has the worst performance among the efficient transformers. Is this also verified on other tasks?
Or other people maybe..
We have released the scores on the individual LRA tasks. It will be interesting to see how the Performer works on other tasks beyond LRA.
Noice
I'd bet a million dollars that AGI, when discovered, uses frequencies of waves rather than any matrices.
Mathematically ugly, but somehow it works well. I don't feel good that both the Nyströmformer and the Performer rely on random sampling.
No, Nyströmformer does not rely on random sampling.