ICLR 2021 Keynote - "Geometric Deep Learning: The Erlangen Programme of ML" - M Bronstein
- Published 14 May 2024
- Geometric Deep Learning: The Erlangen Programme of ML - ICLR 2021 Keynote by Michael Bronstein (Imperial College London / IDSIA / Twitter)
“Symmetry, as wide or as narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order, beauty, and perfection.” This poetic definition comes from the great mathematician Hermann Weyl, credited with laying the foundation of our modern theory of the universe. Another great physicist, Philip Anderson, said that "it is only slightly overstating the case to say that physics is the study of symmetry."
In mathematics, symmetry was crucial in the foundation of geometry as we know it in the 19th century. Now it could have a similar impact on another emerging field. Deep Learning's success in recent decades is significant: from revolutionising data science to landmark achievements in computer vision, board games, and protein folding. At the same time, a lack of unifying principles makes it difficult to understand the relations between different neural network architectures, resulting in the reinvention and re-branding of the same concepts.
Michael Bronstein is a professor at Imperial College London and Head of Graph ML Research at Twitter, who is working to bring geometric unification of deep learning through the lens of symmetry. In his ICLR 2021 keynote lecture, he presents a common mathematical framework to study the most successful network architectures, giving a constructive procedure to build future machine learning in a principled way that could be applied in new domains such as social science, biology, and drug design.
Based on M. M. Bronstein, J. Bruna, T. Cohen, P. Veličković, Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, arXiv:2104.13478, 2021 (arxiv.org/abs/2104.13478)
Accompanying blog post: towardsdatascience.com/geomet...
More information: geometricdeeplearning.com/
Animation: Jakub Makowski
The presentation quality, content coverage, and animation here are incredibly marvelous! This has certainly set a gold standard for future talks. Thanks a lot for putting this together.
Couldn’t agree more. Depth, breadth and effectiveness of communication are spot on.
What a great keynote, both content-wise and in terms of the visuals. 👏 A good side-product of virtual conferences is certainly the production value of scientific talks going up.
Incredible, really enjoyed this keynote. Agree, one of the best presentations on ML I’ve seen yet. I’m really happy to see the emphasis on clarity to a general audience with such well-crafted illustrations of concepts.
Presentation mastery! You managed to boil things down to the most salient intuitions, all the while covering such a wide breadth of topics! This has me amped to dive into your papers (I'm in fMRI neuroscience, where graph-based predictive modelling has been mostly ineffectual thus far)
Finally!!! Professor Bronstein started posting to his YouTube channel!!!
P. S. And what a start, too! I've been looking for this presentation for weeks now.
I wish I could understand all the details, but my education only takes me so far in understanding the concepts you're going over. I am a newbie ML enthusiast. I really do appreciate the animation; it makes it nice to follow along.
This was deeply thought provoking and wonderfully inspiring.
This talk is so amazing. I really like your interpretation of mathematical formulas, very clear. Thanks for your great work. Hope you make more videos like this. One more time, thank you very much.
This is the best presentation on machine learning I've ever seen. So enjoyable.
This should be a gold standard of keynote talks. Amazing! 👏
This is literally the best presentation about machine learning I have ever seen. Thank you for your marvelous work!
It is very intriguing research and graphically well presented.
I wonder what relationships there are between this unifying geometric perspective of deep learning and random finite sets (stochastic geometry, Poisson point processes), which are now all the rage in the multi-object tracking community.
This presentation is also slightly infuriating in that it goes over very deep concepts very fast. Regardless though, amazing work!
This approach to Geometric Neural Nets is like a potential Nobel-prize-winning grand unification theory (GUT), unifying all the neural net architectures: ANNs, CNNs, RNNs, graph NNs, message-passing NNs (MPNNs), and Transformers (attention neural nets). Wonderful video!! Just like M-Theory: when there is too much innovation accumulating over time, a simplifier needs to be born who can merge and unify all of it into a single, more general-purpose abstraction.
This is amazing. I hope you make more videos like this again!
As a computer science student preparing for my ML course exam, I was just blown away by how all machine learning algorithms are related. Beautiful, stunning work.
It takes a semester for us to comprehend this marathon talk, Sir. Great visionary talk. Thank you Sir
I feel sad that I left this field for financial reasons. But I keep watching these videos
Amazing stuff! Hope we can interview Prof. Bronstein on our show soon 😀
would be honored
The incredible Michael Bronstein is on YouTube!! This is Awesome
This is EPIC! looking forward to more of this great material.
Very interesting perspectives on deep learning and seamless transitions from one concept to another. Truly a masterpiece of scientific presentation. Thank you so much for posting it.
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges is of great importance for my master's degree. Great presentation; it is an honor.
This inspires me to continue my education. My brain is itching to learn more!
I was amazed by your presentation, good job. But what amazed me was that I was able to understand in detail everything you explained. 35 years ago I studied physics and mathematics and learned all aspects of what you told in this video without ever realizing it could be applied to AI as well. Like you I was confused about the why of convolution, thanks for giving me the light !
Well done! Clear and visual! Please more like that! Thanks a lot!
Absolutely Amazing Prof Bronstein!
Thank you for such an amazing piece of content.
Thank you, Michael! One of the best presentations I have ever seen.
I was in awe to see how the underlying maths unifies DL techniques. I daresay the community NEEDS a similar but in-depth deconstruction of particular topics. There are a lot of knowledgeable people in the comments; someone please make it happen
Thank you for uploading.
I hope it will talk about the coding part too.
Great work... this has the chance to advance DL considerably, especially detecting "intrinsic features" which will solve many existing problems
This is real science !!! Thumbs up!
Great, concise, and very explanatory presentation. Thank you very much for uploading this content.
Thank you very much for your great talk!
very good coverage. thank you, Prof. Bronstein
Beautiful presentation. Got some ideas to test. Thank you.
I'm in love with this presentation format! Would you consider sharing the Illustrator and After Effect project files? I'd like to learn how to do this and have no clue where to start!
Wow, you took it to the next level!
Super informative and impressive.
Very nice animations make it a lot easier to follow. Thanks!
Thank you for this great presentation and for sharing it with the common public.
Presentation quality is stunning
Such an amazing lecture! Thank you very much :)
I am quite excited about this field. Traditionally, innovation in biotech engineering was hampered by ethical concerns. With this technique we can quickly innovate without any political ramifications. This is quite akin to the growth of the internet itself
Wow! This is an excellent presentation. I guess your classes are something like this, and your students are very lucky to have you as a professor.
Thank you! amazing presentation!!! I giggled a little when seeing 2:40
Amazing. I'm speechless.
Wow. Just. Wow.
The quality of this presentation is incredible. The animations enabled me to grasp concepts (almost) instantly. So incredibly helpful for my current paper. Thank you ever so much for the money, time, and effort it took to produce a video of such exceptional quality.
Thank you. Such comments are the best motivation to continue doing more!
I must admit, I came to this link accidentally. The presentation is a master piece. Keep it going. Following.
I wasn't sure at first as to how you wanted to connect the different geometries with deep learning , but as the video went on, I could see what you meant. And now, I am thinking about how it can be applied in emotion classification project I'm interested in. Thank you for the general insight, It would be incredibly awesome if you can attach some git works.
Thank you so much for this. After Sunday lunch, idling through YouTube, I was dragged down an n-D rabbit hole, through some maths and psychology history, to some hairy transformations of non-trivial representations into manageable ones, and how they can improve the lives of astronomers, computer gamers, and pharmacologists. How mapping foods and drugs could alleviate diseases; how computers could trawl through posts and comments to find a small subset of interesting ones. Even YouTube itself joined in, and removed adverts, Brexit rants, music, and chess blogs from my starter screen. What a great life you lead!
Just wow 💯 ; this is inspiring me to learn more ,. Amazing presentation 💫
This is amazing, sir. Hopefully this will motivate the student community to take up mathematics very seriously
Such an inspiring presentation!
A great presentation professor. Reminds me of 3blue1brown
OK, I now need a Hinton, Bengio, LeCun & Schmidhuber print. In an antique frame.
Now this was enlightening !
Imagine how much time the presenter has spent preparing this presentation.
Awesome Thanks!
This is amazing presentation 👍👍👍
The introduction reminds me of talks by S. Mallat, where already in 2012 he was showing, on the one hand, the underlying symmetry invariance in his wavelet scattering system and, on the other hand, the analogy of this system with deep CNNs, concluding that deep learning architectures might learn symmetry-group invariances, like learning the groups of cats, dogs, tables, etc. I like this group theory approach very much; it is not often discussed in the literature so far
Indeed we cite Mallat in the book - his paper with Joan Bruna on scattering networks established that CNNs are not only shift-equivariant but also approximately equivariant to smooth deformations
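As a small illustration of the shift-equivariance property discussed here (a minimal NumPy sketch of my own, not code from the talk or the book; `circ_conv` and `shift` are hypothetical helper names): convolving a cyclically shifted signal gives exactly the cyclic shift of the convolved signal.

```python
import numpy as np

def circ_conv(x, k):
    """Circular 1D cross-correlation of signal x with kernel k."""
    n = len(x)
    return np.array([sum(x[(i + j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

def shift(x, s):
    """Cyclically shift signal x by s positions."""
    return np.roll(x, s)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = np.array([0.25, 0.5, 0.25])

# Equivariance: convolving the shifted signal equals shifting the convolved signal.
lhs = circ_conv(shift(x, 2), k)
rhs = shift(circ_conv(x, 2 * 0 + k), 2) if False else shift(circ_conv(x, k), 2)
assert np.allclose(lhs, rhs)
```

For deformations that are not pure shifts (the smooth deformations mentioned above), the equality becomes approximate rather than exact, which is the point of the scattering-network result.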
absolute gold
This was wonderful!!!!!!!
It's year 2030. MLPs are SOTA on all domains imaginable to human mind.
MLP AGI whispers: Michael didn't mention me in his ICLR keynote.
Paperclips.
interesting.... I'm working on the same thing independently.... I believe this is ultimately the theory of everything.
Great talk!!!!
This is really amazing!
Is one of the possible domains of GDL going to be in any instance of a dynamic system? For instance not just proteins but interactions between molecular pathways? Or meme propagation networks?
Very interesting. I have always had the question of whether there is a way to define transformations in deep learning, and this video shows how it's done. Thank you; I'd like more on this topic, but it's hard for me to understand all the mathematics.
My old math teacher would break out in a sweat of disbelief seeing that higher mathematics can be used to recognise cats !
Super cool talk!!
wonderful work.
Masterpiece!
Oh yeah, RealSense, I've been working with them in image recognition, trying to build something similar to Complex-YOLO, but in a more engineering way. However, the quality was not suited for the harsh conditions we were exposing the devices to (a pig stall). It was also the time when the first extensive neural network libraries became available, and I said that in a few years the technical calibration of the camera would just be replaced by a neural network. And, broadly speaking, that's what drives my current research.
Love at first sight... ❤️
Thank you for the great video.
I wonder what Stephen Wolfram thinks about this ;-)
28:38 - 3D sensor to capture face - 10 years ago - Intel integrated 3D sensor into their product
30:17 - we don’t need a 3D sensor now - we can use 2D video + geometric decoder that reconstructs a 3D shape
36:50 - tea, cabbage, celery, sage
omfg, wow. what a presentation!
Thanks for the video. I wanted to know more about this view of machine learning.
Check our proto-book on which the talk is based: arxiv.org/abs/2104.13478
@@MichaelBronsteinGDL thanks
Full-fledged AR and VR products launching soon is one of the takeaways. The Metaverse is here
Awesome!
Very nice presentation
This is one of the most beautiful presentations I have ever seen in my life. I'll be honest here: I did not understand much, but I'm truly inspired to learn the material. Professor Bronstein, would a deep learning / signal processing background be enough to pick up this material?
I would give a biased response, but probably our forthcoming book we are currently writing (a preview is available here: arxiv.org/abs/2104.13478)
Great talk! And outstanding visuals! How were they made?
You could make this in After Effects
This is amazing.
A very cool presentation, just wanted to ask if the scale transformation described at 09:31 has anything to do with renormalization groups methods in physics ?
I don’t see an immediate connection
I get what you say; good point imo
Absolutely great presentation! What software was used to create these animations? :) Thanks
Damn! That's awesome! As a side note, may I ask what was used to create the visuals and animations for this talk? They are gorgeous!
Adobe AE and two months of work of two professional designers
@@MichaelBronsteinGDL That would have been my guess, professional designers involved. Thanks!
@@MichaelBronsteinGDL Great animations, and thank you for your efforts to share this valuable knowledge.
AMAZING!!
I only got here from other videos on the topic. Nice presentation, one that assumes a bit more linear algebra and group theory fundamentals (though one only needs the very basics of those fields, plus basics of analysis, to follow the concepts in ML/DL), but it gets a bit more into actual details than other videos I have watched on the same topic, which I appreciated. If only there weren't so many self-promoting plugs throughout the video; it gave me the impression that the actual science in the video served a bit too much as an instrument for promoting your own work. I guess it might be a cultural trait of the field and this is how things work, but from what I gathered from the comments, active or former researchers in the field (I don't qualify as such) already knew not only you but your work as well (which I have absolutely no doubt is indeed very noteworthy) prior to the video.
Subscribed.
I think invited speakers are invited exactly because of their expertise, and they are expected to talk about their own work (hence the "self-promoting plugs", which are some of the first works in the field that we did with students and collaborators). In the book we show a more balanced overview; however, for the video I chose the works I relate to more.
Wow, that's so dope!!! Thanks for this great production quality and delivery Michael!
Btw, would love to have you on my podcast talking about GDL!
this is amazing
Oh. My. God.
It's a shame that I am too dumb to deeply understand everything that was said; nevertheless, even what I did get is astonishingly fascinating!
I so regret not studying harder in my university days; maybe I would have had a chance to work on something this impactful and motivating.
Nice!
Thanks
awesome!!
This presentation is as great as the talk itself. What software did you use to create the presentation graphics?
It was done by professional designers: Photoshop/Illustrator/After Effects
Where can I find more information on the project that helps classify the molecules on plant based foods??
Here is a blog post: towardsdatascience.com/hyperfoods-9582e5d9a8e4?sk=d20fe73c7d9ecb62dd3d391a44d4ef7f
My mind was blown away when I saw that even food preparation can be represented as a computational graph, with cooking transformations as edges, and optimized to maximally preserve the anti-cancer effect 🙌.
I heard "long-range interaction"; interesting
Great presentation. Can you tell me what software you used to animate the graphs?
AfterEffects
Excellent generalisation of deep learning. I can see Linear Algebra, Graph theory, Group theory and many other math branches intersecting with physics, computer graphics and biology. This is truly a gem of ML.
BTW, what's on the y-axis of this graph at 18:58 ?
The task is regressing the penalized water-octanol partition coefficient (logP) on molecules from the ZINC dataset. Y-axis shows the testing Mean Absolute Error.
Time-based change, from data to force alteration, leads to transformation and amorphism. Like water, which remains water at different temperatures, it survives all economic, political, and religious conditions and remains a kind, compassionate, and creative, wise human
👏👏👏
It is indeed a very high-quality, high-effort presentation. But what really annoys me about the subject is that deep learning people like to acknowledge the weaknesses of their neural networks only when they're attempting to solve them. And when they are not, they like to pretend those weaknesses don't exist and their approach is flawless.
Take this graph isomorphism problem, for example: it is a major problem in representing a graph in any linearized fashion, but I have read many papers that just go on boasting about how well their blabla-net performs instead of talking about these limitations. A lot of DL research seems to be hype-driven rather than problem-driven.
I agree to some extent, and here is one example related to graph isomorphism: it's easy to talk about expressivity, much harder to show any results about generalization power. To the best of my knowledge, very little is currently known about how GNNs generalize.
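A concrete instance of the expressivity limit being discussed (a minimal sketch of my own, not from the talk; `wl_histograms` is a hypothetical helper name): 1-WL color refinement, whose discriminative power upper-bounds standard message-passing GNNs, cannot distinguish two disjoint triangles from a single 6-cycle, because both graphs are 2-regular and every node always sees the same neighborhood signature.

```python
from collections import Counter

def wl_histograms(graphs, rounds=3):
    """1-WL color refinement over several graphs with a shared palette.

    Each graph is an adjacency list; returns one multiset (Counter)
    of final node colors per graph.
    """
    colors = [[0] * len(adj) for adj in graphs]  # all nodes start with the same color
    for _ in range(rounds):
        # new color signature = (own color, sorted multiset of neighbor colors)
        sigs = [[(colors[g][v], tuple(sorted(colors[g][u] for u in adj[v])))
                 for v in range(len(adj))] for g, adj in enumerate(graphs)]
        # relabel signatures consistently across all graphs
        palette = {s: i for i, s in enumerate(sorted({s for gs in sigs for s in gs}))}
        colors = [[palette[s] for s in gs] for gs in sigs]
    return [Counter(c) for c in colors]

# Two disjoint triangles vs a single 6-cycle: non-isomorphic graphs,
# but both are 2-regular, so 1-WL assigns identical color histograms.
two_triangles = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]
six_cycle = [[5, 1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]

h1, h2 = wl_histograms([two_triangles, six_cycle])
assert h1 == h2  # 1-WL (hence a vanilla message-passing GNN) cannot tell them apart
```

This is exactly the expressivity side of the distinction drawn above; whether such a pair matters in practice is a generalization question, about which, as noted, much less is known.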