Thanks Albert and Sam! As someone researching the Mamba architecture right now, I found this surprisingly insightful!
Super happy to see you on YT! Been missing you since Alphabet scrapped Google Podcasts! Awesome content.
As of 2024, the latent diffusion paradigm has been very successful on 'natural' modality tasks (sound, images, video), and it is now being applied to 3D spatial awareness. We've actually been in the post-transformer era for a while (1-2 years)! I'm wondering where Gu's work fits in here: perhaps these Mamba models will produce better latents for extremely long-context video and spatial point-cloud data? Will stay tuned. Thanks for the talk!
The problem with latent diffusion (something like DiT) is that it's too slow, especially with high-bandwidth data like images. Mamba will help in the encoder part, but I don't see how to benefit from it in the decoder part. I would suggest you check out VAR (Visual AutoRegressive modeling). It works by predicting the next resolution instead of denoising out of noise, and it's around 20x faster with better performance. (A rough sketch of the next-scale idea follows after this thread.)
@mephilees7866 Excellent, thank you!
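To make the next-scale idea above concrete, here is a minimal sketch of coarse-to-fine autoregressive generation. The class name (NextScaleGenerator), the shared Transformer backbone, the scale schedule, and the query construction are all illustrative assumptions; this shows the general idea, not the actual VAR implementation.

```python
# Sketch of next-scale (coarse-to-fine) autoregressive generation.
# Each scale's full token map is predicted in one pass, conditioned on all
# coarser scales, instead of iteratively denoising from noise.
import torch
import torch.nn as nn

class NextScaleGenerator(nn.Module):  # hypothetical name, not from VAR
    def __init__(self, dim=256, scales=(1, 2, 4, 8, 16)):
        super().__init__()
        self.scales = scales
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.start = nn.Parameter(torch.zeros(1, 1, dim))  # learned start token
        self.to_query = nn.Linear(dim, dim)                 # placeholder conditioning

    def forward(self, batch_size=1):
        context = self.start.expand(batch_size, -1, -1)     # tokens generated so far
        token_maps = []
        for s in self.scales:
            # Predict all s*s tokens of the next scale at once, conditioned on
            # every coarser scale already in `context`.
            query = self.to_query(context.mean(dim=1, keepdim=True)).expand(-1, s * s, -1)
            out = self.backbone(torch.cat([context, query], dim=1))
            next_scale = out[:, -s * s :, :]                 # token map at s x s
            token_maps.append(next_scale)
            context = torch.cat([context, next_scale], dim=1)
        return token_maps  # coarse-to-fine maps, to be decoded by a VQ-style decoder

maps = NextScaleGenerator()(batch_size=2)
print([m.shape for m in maps])  # maps at 1x1, 2x2, 4x4, 8x8, 16x16
```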
Interesting to hear that the author of Mamba feels that attention is indispensable. My initial thought was that Mamba is a full replacement for Transformers, but it seems that Gu believes attention layers are still necessary for the model to be able to reason at the level of tokens. Perhaps hybrid models like Jamba are the way to go.
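For context, here is a hedged sketch of what a hybrid stack in the spirit of Jamba might look like: mostly cheap linear-time sequence-mixing layers with an occasional attention layer for token-level recall. The layer ratio, the SimpleSSMLayer stand-in (a gated causal convolution rather than a real Mamba block), and the omission of causal masking in the attention layers are simplifying assumptions, not Jamba's actual configuration.

```python
# Sketch of a hybrid stack: interleave a few attention layers among mostly
# SSM-style mixing layers. All hyperparameters here are illustrative.
import torch
import torch.nn as nn

class SimpleSSMLayer(nn.Module):
    """Stand-in for a Mamba block: a gated depthwise causal convolution mixer."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=4, padding=3, groups=dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):                                   # x: (batch, seq, dim)
        h = self.conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return x + h * torch.sigmoid(self.gate(x))

class HybridStack(nn.Module):
    def __init__(self, dim=512, depth=8, attn_every=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(depth):
            if (i + 1) % attn_every == 0:
                # Occasional attention layer for precise token-level retrieval
                # (causal masking omitted here for brevity).
                self.layers.append(nn.TransformerEncoderLayer(
                    d_model=dim, nhead=8, batch_first=True))
            else:
                # Cheap sequence mixing for the bulk of the depth.
                self.layers.append(SimpleSSMLayer(dim))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

x = torch.randn(2, 128, 512)
print(HybridStack()(x).shape)  # torch.Size([2, 128, 512])
```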
Well, it seems like Gu tries to find theoretical relations between attention and SSMs in Mamba-2. To be honest, Mamba doesn't even look like an SSM anymore to me.
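For anyone curious what that attention/SSM relation looks like, here is a small numerical sketch of the duality idea from Mamba-2 (structured state-space duality) as I understand it; shapes and variable names are chosen for illustration. A scalar-gated SSM recurrence produces the same output as a masked, attention-like matrix computation.

```python
# A scalar-gated SSM computed as a recurrence equals a masked matmul whose
# "scores" C_t . B_s play the role of q_t . k_s under a causal decay mask.
import numpy as np

rng = np.random.default_rng(0)
T, N = 6, 4                        # sequence length, state size
a = rng.uniform(0.5, 1.0, T)       # per-step scalar decay (A_t = a_t * I)
B = rng.standard_normal((T, N))    # input projections  (key-like)
C = rng.standard_normal((T, N))    # output projections (query-like)
x = rng.standard_normal(T)         # one input channel

# Recurrent (SSM) form: h_t = a_t * h_{t-1} + B_t * x_t,  y_t = C_t . h_t
h = np.zeros(N)
y_rec = np.zeros(T)
for t in range(T):
    h = a[t] * h + B[t] * x[t]
    y_rec[t] = C[t] @ h

# "Attention" form: y = (L * (C B^T)) x with decay mask
# L[t, s] = a_{s+1} * ... * a_t for s <= t, and 0 above the diagonal.
L = np.zeros((T, T))
for t in range(T):
    for s in range(t + 1):
        L[t, s] = np.prod(a[s + 1 : t + 1])
y_mat = (L * (C @ B.T)) @ x

print(np.allclose(y_rec, y_mat))   # True: two views of the same computation
```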
Brilliant. The tokenizer ought to be a learned parameter that coevolves with the task.
Hi, may I know how to add your channel to Apple Podcasts?
Hi. You can follow our channel here: podcasts.apple.com/us/podcast/the-twiml-ai-podcast-formerly-this-week-in-machine/id1116303051
@twimlai Thank you for your reply. But I cannot visit the site; the URL seems invalid.
Strange. Works on my end. Try twimlai.com/podcast and look for the button on that page.
@twimlai Thank you very much. But it's still not working, so I'll use Spotify now 😃
great interview
How about vision?