Mamba and S4 Explained: Architecture, Parallel Scan, Kernel Fusion, Recurrent, Convolution, Math
- Published 6 Jun 2024
- Explanation of the paper Mamba: Linear-Time Sequence Modeling with Selective State Spaces
In this video I will be explaining Mamba, a new sequence modeling architecture that can compete with the Transformer. I will first start by introducing the various sequence modeling architectures (RNN, CNN and Transformer) and then deep dive into State Space Models. To fully understand State Space Models, we need some background in differential equations. That's why I will provide a brief introduction to differential equations (in 5 minutes!) and then proceed to derive the recurrent formula and the convolutional formula from first principles. I will also prove mathematically (with the help of visual diagrams) why State Space Models can be run as a convolution. I will explain what the HIPPO matrix is and how it helps the model "memorize" the input history in a finite state.
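For reference, here is a compact summary of the formulas derived in the video, in the standard S4 notation (the zero-order-hold discretization below is the one used in the S4 paper):

```latex
% Continuous-time state space model (the D x(t) term is the skip connection):
h'(t) = A\,h(t) + B\,x(t), \qquad y(t) = C\,h(t) + D\,x(t)

% Zero-order-hold discretization with step size \Delta:
\bar{A} = \exp(\Delta A), \qquad
\bar{B} = (\Delta A)^{-1}\bigl(\exp(\Delta A) - I\bigr)\,\Delta B

% Recurrent view (one step per token):
h_k = \bar{A}\,h_{k-1} + \bar{B}\,x_k, \qquad y_k = C\,h_k

% Convolutional view: y = x * \bar{K}, with kernel
\bar{K} = \bigl(C\bar{B},\; C\bar{A}\bar{B},\; C\bar{A}^{2}\bar{B},\; \dots,\; C\bar{A}^{L-1}\bar{B}\bigr)
```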
In the second part of the video, I will explore Mamba and in particular the Selective Scan algorithm, first explaining what the scan operation is and how it can be parallelized, and then showing how the authors further improved the algorithm with kernel fusion and activations recomputation. I will also provide a brief lesson on the memory hierarchy of the GPU and why some operations may be IO-bound.
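To make the scan idea concrete, here is a minimal Python sketch (illustrative only, not the authors' CUDA kernel; all names are my own): the recurrence h_k = a_k * h_{k-1} + b_k can be expressed through an associative "combine" operator, and any associative operator admits a parallel prefix scan.

```python
def combine(left, right):
    """Compose two affine updates (a1, b1) then (a2, b2): h -> a2*(a1*h + b1) + b2."""
    a1, b1 = left
    a2, b2 = right
    return (a1 * a2, a2 * b1 + b2)

def sequential_scan(pairs, h0=0.0):
    """Reference implementation: apply each (a_k, b_k) in order."""
    h, out = h0, []
    for a, b in pairs:
        h = a * h + b
        out.append(h)
    return out

def inclusive_scan(pairs):
    """Inclusive scan with `combine`; because `combine` is associative, the prefixes
    could be computed in a parallel tree (done sequentially here for clarity)."""
    acc = pairs[0]
    out = [acc]
    for p in pairs[1:]:
        acc = combine(acc, p)
        out.append(acc)
    return out

if __name__ == "__main__":
    pairs = [(0.5, 1.0), (0.8, 2.0), (0.9, -1.0)]
    ref = sequential_scan(pairs)                              # [1.0, 2.8, 1.52]
    par = [pa * 0.0 + pb for pa, pb in inclusive_scan(pairs)] # apply each prefix to h0 = 0
    assert all(abs(r - p) < 1e-9 for r, p in zip(ref, par))
    print(ref)
```

In an actual parallel scan the `combine` step is applied in a tree (Blelloch-style), giving O(log L) depth instead of O(L).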
In the last part of the video we will explore the architecture of Mamba and some performance results to compare it with the Transformer.
Slides PDF and Parallel Scan (excel file): github.com/hkproj/mamba-notes
Chapters
00:00:00 - Introduction
00:01:46 - Sequence modeling
00:07:12 - Differential equations (basics)
00:11:38 - State Space Models
00:13:53 - Discretization
00:23:08 - Recurrent computation
00:26:32 - Convolutional computation
00:34:18 - Skip connection term
00:35:21 - Multidimensional SSM
00:37:44 - The HIPPO theory
00:43:30 - The motivation behind Mamba
00:46:56 - Selective Scan algorithm
00:51:34 - The Scan operation
00:54:24 - Parallel Scan
00:57:20 - Innovations in Selective Scan
00:58:00 - GPU Memory Hierarchy
01:01:23 - Kernel Fusion
01:01:48 - Activations recomputation
01:06:48 - Mamba architecture
01:10:18 - Performance considerations
01:12:54 - Conclusion
Brilliant - you are easily one of the most lucid and accessible teachers of deep learning.
this is absolutely FANTASTIC
I watched Albert Gu's Stanford lecture on state space models / Mamba, and it was a great high-level overview.
But I really appreciate you taking it slower and going further into detail on the basic, fundamental concepts.
A lot of us aren't mathematicians or ML engineers, so it's much appreciated to be helped along with those concepts.
Thank you for your kind words. Please share the video in your network, it would help me a lot. Thanks!
I rarely comment on videos, but this one was worth it. Thank you so much for such a clear explanation. You explained all the nuances that I previously did not understand in a very clear way. God bless you.
Your teaching approach is very good. You started from fundamental concepts and went deeper. This helped in gaining intuition and understanding, and avoided confusion in the later parts. Brilliant!
Understanding Mamba couldn't be better than this!
I just read about Mamba and wanted to find a detailed explanation video. This video covers everything I need, thank you so much, keep on cooking
Brilliant video! Really clear and with just the right amount of details!
I'm so glad I found this channel, you are a gold mine for such content, please keep them coming.
The whole lecture was very intuitive. Thanks for the efforts put into building this video!
Thanks for the amazing work as usual! Keep it up - this is probably some of the highest-quality content on LLMs on YouTube.
Very high quality, this is great. Hard to find good content like this. Thanks Umar!
This is gold! I really appreciate attention to the details. Thank you Umar!
🙌 Still working through Transformers from scratch. Hopefully a Mamba from scratch is in the future!
As a university student from Beijing, thank you for sharing this explanation of the paper! Best wishes!
After I saw this lecture, I subscribed to your channel. It is the easiest-to-understand Mamba lecture I've seen.
Thanks for explaining it in a way that anyone with some high school math background can understand, keep this up!
Excited for the video. I was searching for a video on Mamba and today I saw this. Your Transformer video helped me a lot previously. Keep it up!
Thank you so much for your detailed video and for thoughtfully anticipating that we would need help with the equations! You are a savior!
Thank you. I appreciate the approach you took in explaining the major concepts.
Thanks a ton! Excellent explanation and great analogies to introduce the more advanced material. This is an absolute masterclass on how to teach advanced material.
Excellent video! Thank you. I have watched a few videos about mamba and this one was by far the best.
Thank you so much. Lots of useful details yet you curate through them at such a good tempo with easy to follow examples
This is one of the best ML explanations I've seen; even though I didn't understand all of it, I definitely learnt something new.
Thank you so much for your efforts to make such an amazing video on Mamba architecture !!
Great explanation!! This is the first video that makes me comprehend the whole Mamba paper.
As others have mentioned, you have a keen ability to explain difficult topics succinctly and completely. Keep up the awesome work! I could have used this when I took a class on time-series modeling! Hah!
This is the best deep learning video I've ever seen. I will surely use some of your slides to teach my students
OMG! This is such an amazing description, you made my day
Best MAMBA video at the moment!
Salute to consistency
Thanks Umar sir.
This is really helpful for another talk I am doing on Mamba. Thank you very much for putting this out.
Amazing explanation. I love this video because it covers sufficient depth and explains each concept with proper examples. I've subscribed instantly, and look forward to more such videos on recent papers.
very good video!!! thanks a lot for your efforts!!!!
I always eagerly wait for your explainers. They are 🤯.
thank you :)
This video is of great help!! Thank you very much.
I did learn a lot! Many thanks for making this video.
Really an amazing video! You save me a lot of time! Thank you!
Amazing! So detailed. Well done sir
Love it! Keep up the amazing work.
Thanks Umar! 🥰Very amazing learning material for Mamba!
Wow, that's a great explanation, thanks for the efforts!
Great explanation!
Amazing video.
Thanks man! This helped me a lot
Absolutely amazing 🎉
Even I understood much of this. I have no education. Thank you! Mamba looks really cool. Especially like the long context and further refinement. It looks like a model that could be made to learn as it goes. Plasticity potential
Brilliant explanations. Thanks.
Ohhh Man, why did I discover this gem so late :( This guy is a rockstar!
excellent work! Thank you
Very nice talk, thank you.
absolutely fantastic
Thank you for this great and smooth explanation. I think the model you are showing at 36:14 is valid if matrix A is diagonal (and B also, so that each input goes directly to the corresponding SSM). In that case each hidden state along a different canonical direction (each element of the vector) is independent of the others. If A is not diagonal, then, assuming an eigendecomposition exists, we may say there is an equivalent SSM whose states are independent (if we change the basis to the eigenbasis).
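For readers following the comment above, the change-of-basis argument can be written out explicitly (a standard linear-algebra identity, not a claim from the video):

```latex
% If A = V \Lambda V^{-1} (eigendecomposition) and we define \tilde{h}(t) = V^{-1} h(t), then
%   h'(t) = A\,h(t) + B\,x(t), \qquad y(t) = C\,h(t)
% is equivalent to an SSM with a diagonal state matrix:
\tilde{h}'(t) = \Lambda\,\tilde{h}(t) + \bigl(V^{-1} B\bigr)\,x(t), \qquad
y(t) = \bigl(C V\bigr)\,\tilde{h}(t)
```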
Amazing video
Thanks a lot that was very useful!
Thanks for the awesome content! Hope the next one will be about DPO and coding it from scratch ❤
You're welcome: ua-cam.com/video/hvGa5Mba4c8/v-deo.html
@@umarjamilai Thank you!!! You're so talented at research and teaching!!!!
Beautiful video, thank you!
Thanks to you!
Fantastic
You're making very useful content, thank you!!! Maybe you could consider using larger text, so that one could read easily from a phone. Also a plus would be if the presentation were white on black (or bright color on black), it is less tiring to look at a dark screen for long periods of time.
you are the best.
Great lecture! It is easier for me to understand the work with your lecture.
Can you give one for Reinforcement learning?
amazing explanation
waiting for new video
please upload soon
Jazakallah Khairan (may God reward you with goodness)
Excellent video! I'm looking forward to it if you do a coding one. Thank you so much for your work for the AI community
A coding one wouldn't be very interesting, because the most interesting part is the selective scan algorithm, which is a CUDA kernel. The architecture is not so different from any other language model. Of course it would be super cool to code the CUDA kernel from scratch ;-)
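For anyone curious what that kernel computes, here is a naive sequential reference of the selective-scan recurrence in NumPy (a sketch only: shapes follow the Mamba paper's description, the B discretization is the simplified Euler-style one, and all names are illustrative rather than the authors' implementation):

```python
import numpy as np

def selective_scan_reference(x, delta, A, B, C, D):
    """Naive sequential reference for the selective-scan recurrence.
    Illustrative shapes: x (L, d), delta (L, d), A (d, n), B (L, n), C (L, n), D (d,)."""
    L, d = x.shape
    n = A.shape[1]
    h = np.zeros((d, n))
    ys = np.zeros((L, d))
    for t in range(L):
        # Discretization happens per time step because delta, B and C depend on the input.
        A_bar = np.exp(delta[t][:, None] * A)          # (d, n)
        B_bar = delta[t][:, None] * B[t][None, :]      # (d, n), simplified (Euler-style) B
        h = A_bar * h + B_bar * x[t][:, None]          # (d, n)
        ys[t] = h @ C[t] + D * x[t]                    # (d,)
    return ys

if __name__ == "__main__":
    L, d, n = 6, 4, 3
    rng = np.random.default_rng(0)
    out = selective_scan_reference(
        rng.standard_normal((L, d)),
        np.abs(rng.standard_normal((L, d))),    # delta > 0
        -np.abs(rng.standard_normal((d, n))),   # A < 0 so exp(delta * A) decays
        rng.standard_normal((L, n)),
        rng.standard_normal((L, n)),
        rng.standard_normal(d),
    )
    print(out.shape)  # (6, 4)
```

The fused kernel avoids materializing the (L, d, n) intermediate states in HBM, which is exactly the IO-bound issue discussed in the video.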
I've just started watching but I guess this vid'll be very useful
Thanks, I hope you explain RWKV
One of the best! I have one question: if we apply the convolution in S4 on a sequence of length L, what will the size of the conv layer be?
Thanks!
Thank you
Brilliant! Awesome!
Thank you!
Very good lecture! Thank you very much for putting this up for free on YouTube :) I have a question though: if my understanding of the HiPPO framework is correct, the A matrix is built to uniformly approximate the input signal (named HiPPO-LegS in the paper): "Our novel scaled Legendre measure (LegS) assigns uniform weight to all history [0, t]". However, at 41:49 you explain that it decays exponentially, similarly to HiPPO-LagT. Do they opt for HiPPO-LagT when moving to S4 and Mamba, or am I missing something?
As a suggestion for your next video, you could cover a GPT decoder-based multimodal model.
Great explanation. Very thorough. Loved it. I struggled with understanding the SSM paper. You explained all the bits beautifully
you are so smart!
great😀😀
Hi, I was wondering if you could explain 36:40 a bit more, where you talk about multi-head attention. From what I understand, in multi-head attention each head looks at the whole input vector. Our key, value and query matrices are all of size Dx(head_size), with D being the embedding dimension, so to get the keys we do key = X @ key_matrix, where X is a CxD matrix and C is the context length. This means each head looks at the whole embedding dimension D and represents it as a head_size vector, so the arrows going into each head should point at every single input dim.
Awesome! Great!
Please can you make a video on optimizers like Adam, Adagrad, ...
Hi Umar, amazing video. You are the best teacher. You are Karpathy 2.0. :) Please make a video on DPO :)
Done: ua-cam.com/video/hvGa5Mba4c8/v-deo.html
@@umarjamilai thank you so much 😃
Need more code from scratch videos!
You are amazing! How did you learn all this?
Thanks!
Thank you very very very much for your generous support! Let's connect on LinkedIn!
You're extremely underrated, I don't think I'll be able to use much valuable info tbh.
Hi Umar, can you please upload a video with a detailed explanation of the GPT architecture?
I have one question about the example you provided, 'the number of bunnies'. I think the function should be: b(t)=5squ(3)^λt. Please comment if I am wrong.
Hey! Thanks for the details in this video.
I'm confused about the HiPPO matrix, which seems to be fixed given N?
However, the paper states that delta, A, B, C are all trainable. What did I miss?
is HiPPO the initialization of A?
Yeah, just the initialization
Thanks for clarification.
Could you please explain further how parameter A has shape (D, N) in S4? If I have D SSMs, one for each embedding dimension, shouldn't A have DN^2 parameters?
This is so far the only video I found that describes the math part of the Mamba model. Thanks a lot.
One small issue: at 37:00, for the attention model, you mentioned that each head takes only a portion of the input dimensions; can you confirm this? I believe each head actually uses all input dimensions.
It might be true for LLMs, but I believe this is not true for the original transformer model.
Hello! First of all thanks for the kind words.
Yes, in multi-head attention, the idea is that each head sees the entire sequence, but a different portion of the embedding of each token. This is to make each head relate tokens in different ways. This mechanism is described in my previous video on the Transformer model.
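To make the reply above concrete, here is a minimal PyTorch sketch of the common implementation (sizes are illustrative, not from the video): a single d_model-to-d_model projection is computed, and its output is then split into per-head slices, so each head works on its own d_head-sized portion of the projected embedding while still attending over the full sequence. Both readings in this thread are compatible: the projection mixes all D input dimensions, but each head then operates on a distinct slice of the result.

```python
import torch

d_model, n_heads = 512, 8              # illustrative sizes
d_head = d_model // n_heads            # 64 dimensions per head

x = torch.randn(2, 10, d_model)        # (batch, seq_len, d_model)
W_q = torch.nn.Linear(d_model, d_model, bias=False)

q = W_q(x)                             # (batch, seq_len, d_model): one big projection
# Split the projected embedding into n_heads slices of size d_head:
# each head still sees the whole sequence, but only its own d_head-wide slice
# of each token's (projected) embedding.
q_heads = q.view(2, 10, n_heads, d_head).transpose(1, 2)   # (batch, n_heads, seq_len, d_head)
print(q_heads.shape)                   # torch.Size([2, 8, 10, 64])
```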
Can we run Mamba via normal GPU?
PLEASE explain spacetimeformer
You're GOAT
GOAT? 🐐 Beeeehhh 😅😅
@umarjamilai yeah you're Greatest Of All Time (GOAT)
Umar, please do a "train Mamba from scratch" video. Everybody wants that (there are a lot of requests even on the Mamba GitHub, but the authors said they have not published the training loop). I hope and believe you will fix this knowledge gap.
Orange 🧡 place brought me here.
What's orange place?
Listened for about half an hour, didn't get a clue about this topic, stopped it! Thanks for the attempt though
I also want to learn something
Am I watching a total rip-off of the Fourier Transform and Z-Transforms in all of AI/ML? The differential equation is brute force. We use Laplace.
Yeah, you can do everything in the s-domain with the Laplace transform, but most ML researchers do not have a controls engineering background, so we stick to differential equations 😉
Thanks!