#036

  • Published 27 Jun 2024
  • Today we had a fantastic conversation with Professor Max Welling, VP of Technology, Qualcomm Technologies Netherlands B.V.
    Max is a strong believer in the power of data and computation and their relevance to artificial intelligence. Machine learning currently operates under a fundamentally blank-slate paradigm: experience and data alone rule the roost. Max wants to build a house of domain knowledge on top of that blank slate. He holds that there are no predictions without assumptions and no generalization without inductive bias; the bias-variance trade-off (recalled below) tells us that we need to bring in additional human knowledge when data is insufficient.
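    For reference, the trade-off in question is the textbook bias-variance decomposition of expected squared error (standard material, not a result from the episode): for data y = f(x) + ε with noise variance σ², the error of an estimator f̂ fit on a dataset D splits as

    ```latex
    \mathbb{E}_{D,\varepsilon}\left[\bigl(y - \hat{f}(x; D)\bigr)^{2}\right]
      = \underbrace{\bigl(\mathbb{E}_{D}[\hat{f}(x; D)] - f(x)\bigr)^{2}}_{\text{bias}^{2}}
      + \underbrace{\mathbb{E}_{D}\left[\bigl(\hat{f}(x; D) - \mathbb{E}_{D}[\hat{f}(x; D)]\bigr)^{2}\right]}_{\text{variance}}
      + \underbrace{\sigma^{2}}_{\text{noise}}
    ```

    Stronger priors shrink the variance term at the price of possible bias, which is exactly the trade Max advocates making when data is scarce.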
    Max Welling has pioneered many of the most sophisticated inductive priors in the deep learning models of recent years, allowing us to apply deep learning to non-Euclidean data, i.e. graphs and other topologies (a field we now call "geometric deep learning"), and allowing network architectures to exploit symmetries in the data, for example gauge or SE(3) equivariance (a minimal numerical sketch of equivariance follows below). Max has also brought many other concepts from his physics playbook into ML, for example quantum and even Bayesian approaches.
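    To make "equivariance" concrete, here is a minimal self-contained sketch (our illustration, not code from the episode or from Qualcomm): a convolution whose kernel is invariant under 90° rotations commutes with 90° rotations of its input, the simplest grid-world analogue of the continuous symmetries these architectures build in.

    ```python
    # Minimal equivariance check (illustrative only; not from the episode).
    # A map F is equivariant to a group action g if F(g(x)) == g(F(x)).
    # Here g is a 90-degree rotation and the kernel is chosen rot90-invariant.
    import numpy as np
    from scipy.ndimage import convolve

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 8))            # toy single-channel "image"

    kernel = np.array([[0., 1., 0.],
                       [1., 4., 1.],
                       [0., 1., 0.]])          # unchanged by np.rot90

    F = lambda img: convolve(img, kernel, mode="wrap")  # circular boundary
    g = np.rot90                               # the group action (C4 rotation)

    # Rotating before or after applying F gives the same result.
    assert np.allclose(F(g(x)), g(F(x)))
    print("F(g(x)) == g(F(x)): 90-degree rotation equivariance holds")
    ```

    Gauge and SE(3) equivariant networks generalize exactly this commutation property from 90° grid rotations to continuous 3D roto-translations, and to symmetries defined only locally on curved surfaces.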
    This is not an episode to miss; it might be our best yet!
    Panel: Dr. Tim Scarfe, Yannic Kilcher, Alex Stenlake
    00:00:00 Show introduction
    00:04:37 Protein folding from DeepMind (AlphaFold2) -- did it use an SE(3) transformer?
    00:09:58 How has machine learning progressed?
    00:19:57 Quantum Deformed Neural Networks paper
    00:22:54 Probabilistic Numeric Convolutional Neural Networks paper
    00:27:04 Ilia Karmanov from Qualcomm interview mini segment
    00:32:04 Main Show Intro
    00:35:21 How is Max known in the community?
    00:36:35 How Max nurtures talent -- freedom and relationships are key
    00:40:30 Selecting research directions and guidance
    00:43:42 Priors vs experience (bias/variance trade-off)
    00:48:47 Generative models and GPT-3
    00:51:57 Bias/variance trade-off -- when do priors hurt us?
    00:54:48 Capsule networks
    01:03:09 Which old ideas should we revive?
    01:04:36 Hardware lottery paper
    01:07:50 Greatness can't be planned (Kenneth Stanley reference)
    01:09:10 A new sort of peer review and originality
    01:11:57 Quantum Computing
    01:14:25 Quantum deformed neural networks paper
    01:21:57 Probabilistic numeric convolutional neural networks
    01:26:35 Matrix exponential
    01:28:44 Other ideas from physics, e.g. chaos, holography, renormalisation
    01:34:25 Reddit
    01:37:19 Open review system in ML
    01:41:43 Outro
    Pod version: anchor.fm/machinelearningstre...
    Ilia Karmanov, Senior Engineer, Qualcomm Technologies Netherlands B.V.:
    / ilia-karmanov-09aa588b
    Professor Max Welling, VP of Technology, Qualcomm Technologies Netherlands B.V.:
    / max-welling-4a783910
    Probabilistic Numeric Convolutional Neural Networks (Marc Finzi, Roberto Bondesan, Max Welling)
    arxiv.org/abs/2010.10876
    Quantum Deformed Neural Networks (Roberto Bondesan, Max Welling)
    arxiv.org/abs/2010.11189
    Qualcomm AI Research is hiring for several machine learning openings, so please check out the Qualcomm careers website if you’re excited about solving big problems with cutting-edge AI research - and improving the lives of billions of people.
    www.qualcomm.com/company/careers
    We used a clip from Qualcomm's official video on Gauge Equivariant Convolutional Networks with permission: • Our Gauge Equivariant ...
    The drone footage is from my friend Marcus White -- • Dubai - Cinematic FPV -- and is used with his permission
    Intro music: / homeward
    Disclaimer: We have had official approval from Qualcomm to publish this video, and they have not paid us anything!
    #machinelearning #deeplearning

COMMENTS • 62

  • @akashkumar-jg4oj
    @akashkumar-jg4oj 3 years ago +55

    I really want to appreciate and acknowledge the amount of effort you put into your videos. From great introductions to great discussions. Thanks for sharing this with the world.

    • @betoprocopio
      @betoprocopio 2 years ago

      y e s
      after I saw my first vid here, I was talking about the quality of it ALL weekend

  • @AICoffeeBreak
    @AICoffeeBreak 2 years ago +6

    What I absolutely love about Prof. Welling's appearance here is how he talks about projects he does with his students (like any Prof / supervisor), but also *names* the students, makes them *visible*. This is almost unique behaviour in a sea of supervisors who give keynotes and talks but hide the hard work of their students behind "we did this" or "one of my students tried that". Max Welling sees the evident situation: a) he has tenure, a stable income, and influence in the field; b) it is his students who still have to make a name and place for themselves in the field. So he helps them by highlighting who they are and how much it is *their* work too.
    Btw 0:51 is my absolute favourite moment from any MLST episode! 😂

  • @ideami
    @ideami 3 years ago +24

    Fantastic episode, this channel has become my favourite ML channel online, the content is deep, the style is refreshing, the mix of multiple minds debating back and forth in an open and respectful yet bold way is simply brilliant (and a key positive differentiating factor in my view); I am passionate about generative AI and it was great to learn about Max's views on the topic (and causality), and yes, a conversation between Max Welling and Karl Friston would be something; also super revealing how physicists are bringing their insightful perspectives onto the ML field, I have experienced this personally when interacting with some of them, another reminder of the importance of how domain experts in other areas can help shake things in the ML community (and physicists in particular, with their deep and vast body of knowledge, are in a great position to do this); the quantum stuff, with all the open questions attached, was intriguing, challenging and provocative; brilliant episode, I hope you keep doing this for a very long time! ;)

  • @pandatory1108
    @pandatory1108 3 years ago +1

    I remember attending a short symposium a couple of years ago where Professor Welling was a speaker. There were other eminent speakers like Geoff Hinton, Terry Sejnowski, Radford Neal, and Ilya Sutskever there as well. But I distinctly remember Prof Welling's lecture because he took two fields which I was largely unfamiliar with (theoretical physics and equivariant representations) and explained them in a manner I could largely grasp.

  • @fredxu9826
    @fredxu9826 3 years ago +1

    Glad that I found this channel: what a gem.
    Listening to these talks really aids motivation and intuition.

  • @ziddlidos
    @ziddlidos 3 years ago +2

    Great videos guys, very inspiring! I’m about to start a PhD in machine learning and it is very exciting to see Welling’s intuitions about the future of the field and the community. Cheers!

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago +9

    Wow, why did it take so long for YouTube to recommend this channel?

  • @therealjewbagel
    @therealjewbagel 3 years ago +1

    Definitely becoming one of my favorite AI/ML channels. Glad to see some more exposure for Max and co.'s work!

  • @reis1996
    @reis1996 3 years ago +2

    i like this! especially the fact that you don't interrupt the person being interviewed in the middle of a thought! keep going!

  • @vahidhosseinzadeh4630
    @vahidhosseinzadeh4630 2 months ago

    Thank you for the great content. I was a physicist for a long time, mostly working on symmetries, and changed to deep learning, which to my surprise is actually physics again 😊

  • @oncedidactic
    @oncedidactic 3 years ago +1

    Obviously great convo and thanks again! In particular I am really happy this time to get a review of how physics “basics” are being applied to ML, this was an amazing high level “report” in that regard.
    The way I see it, the last 2-3 centuries of physics research forged really great raw math alloy into sharp swords, but we're still training everyone how to swashbuckle. Meanwhile, alloy mining and sword skunkworks continue apace.

  • @russelldicken9930
    @russelldicken9930 3 years ago

    Wow! This is the first episode I have seen from Machine Learning Street Talk. It won't be the last!
    I nearly fell off my chair on hearing this discussion. At age 70 I have forgotten much of the maths I studied earlier, but I have tried to keep up using JupyterLab. This talk is directly in line with my views, particularly on Lie groups and manifolds. Looking forward to more of this. More from Prof Welling please.

  • @dwhdai
    @dwhdai 3 years ago +1

    How have I not come across this podcast/channel until now? This is incredible content and quality!

  • @markryan2475
    @markryan2475 3 years ago

    Another fantastic episode - jam packed with thought-provoking ideas, great questions, and a really interesting guest. Thanks so much for taking the time to put this episode together and to share it.

  • @machinelearningdojowithtim2898
    @machinelearningdojowithtim2898 3 years ago +12

    First! 😎🙌👌 So excited about this one, is it our best yet or what? 😃🎄😜

  • @manojgambhir6737
    @manojgambhir6737 3 years ago

    Extraordinary couple of hours of listening. Amazingly well done, MLST team.

  • @3145mimosa
    @3145mimosa 3 years ago +1

    I learned a lot from watching this interview. Thank you so much!

  • @fatmaguney3598
    @fatmaguney3598 3 years ago +7

    who does these amazing visualizations? congrats!

  • @jimlbeaver
    @jimlbeaver 3 years ago

    So many great ideas in this talk. I’m really glad I found this channel. Keep up the great work

  • @ReelSky
    @ReelSky 3 years ago

    Love this, thanks for using the footage 🙌🏼

  • @Georgesbarsukov
    @Georgesbarsukov 2 years ago

    That intro was the most exciting ML intro I've ever heard.

    • @Georgesbarsukov
      @Georgesbarsukov 2 years ago

      This guy makes me want to leave my FAANG company to work for his lab. In fact, I'm going to apply.

    • @Georgesbarsukov
      @Georgesbarsukov 2 years ago

      Definitely my favorite episode so far.

  • @ethanconnelly8794
    @ethanconnelly8794 3 years ago

    So glad this was recommended. Cheers for this.

  • @vivekmittal2043
    @vivekmittal2043 3 years ago

    This conversation is awesome! Amazing questions and amazing answers. Thanks for creating this.

  • @3nthamornin
    @3nthamornin 3 years ago

    Your videos are so informative on topics that are very difficult to understand. Thanks

  • @TheAIEpiphany
    @TheAIEpiphany 3 years ago +1

    54:07 The interesting thing about our visual system is that it probably doesn't have this rotational equivariance/invariance explicitly built-in. Try rotating the book in front of you while reading and you'll have a hard time reading it, right?
    So it's "obviously" not some perfect mathematical function built into our brain architecture. That, however, is not an argument that we shouldn't do it that way. We can do better than evolution on some things that are of interest to us, and we've demonstrated that with various tech advancements -- the "airplanes don't fly like birds, but they're faster and that's what we care about" kind of argument.
    Absolutely loved the show! You guys nailed it. Hey Tim, how long does it take for you to edit this thing? The intro is crazy; I can imagine the time that went into preparing this one.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  3 years ago +1

      Thanks for commenting Aleksa! "how long does it take for you to edit this thing" You don't want to know 😂

    • @TheAIEpiphany
      @TheAIEpiphany 3 years ago

      @@MachineLearningStreetTalk hahah got it, I won't tell anybody. 😂 Anyways, keep it up, it's great!

  • @Daniel-ih4zh
    @Daniel-ih4zh 3 years ago +12

    I feel like this direction in ML is the most productive currently.

  • @mikemihay
    @mikemihay 3 years ago +4

    Great content!

  • @marceldesutter962
    @marceldesutter962 3 years ago

    This channel is insane. I'm glad I just stumbled upon this.

  • @yashmandilwar8904
    @yashmandilwar8904 3 years ago +5

    Please get Ilia on the show!

  • @Hypotemused
    @Hypotemused 3 years ago +3

    Source of the Alphafold2 PDF that Tim shows @8:53 - www.predictioncenter.org/casp14/doc/presentations/2020_12_01_TS_predictor_AlphaFold2.pdf

  • @ramzisofiane5909
    @ramzisofiane5909 2 years ago

    Thanks for this amazing content!

  • @abby5493
    @abby5493 3 years ago +3

    Amazing video 😍😍

  • @JTMoustache
    @JTMoustache 3 years ago

    My favorite episode so far - Invariance is all 🦾

  • @ethanconnelly8794
    @ethanconnelly8794 3 years ago +1

    I think quantum neural nets + room-temp SQUIDs + fusion could set off the singularity.
    A priori input shapes will be the difference between good and evil, and we need to be careful.

  • @amitkumarsingh406
    @amitkumarsingh406 3 years ago +1

    I love physics x ML research. Proves we're edging closer to the simulation.
    P.S. Great content, guys. This channel keeps me motivated. Keep up the good work!

  • @norik1616
    @norik1616 3 years ago +4

    "The reviewers are a bit too grumpy. If it's not a completely finished idea, they will find the hole and they start pushing on it."
    - almost every Yannic paper overview 🤣
    To be fair, the rants are on point and the great ideas are uplifted.

  • @idiosinkrazijske.rutine
    @idiosinkrazijske.rutine 3 years ago

    When we learn about mathematics and physics at university and beyond, symmetries are looked for everywhere; even conservation laws are symmetries. Yet in ML, people seem hyped learning about this. I have the impression that people in ML research know precisely why these mathematical tools work and sell them with a veil of mysticism ("mathematics of general relativity and quantum field theory", wow, really) to younger people with software engineering or computer science backgrounds. Just my impression.

  • @priyamdey3298
    @priyamdey3298 3 years ago +2

    An absolute delight as usual! As a side note, for anyone interested in how Gauge CNNs came to be (in layman's terms) and their possible impact on the DL community as well as (perhaps even more) the physics community, here is a wonderful article on the subject: www.quantamagazine.org/an-idea-from-physics-helps-ai-see-in-higher-dimensions-20200109/

  • @yasserdahou5308
    @yasserdahou5308 3 years ago

    Amazing one!!! When are you having Schmidhuber on?

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    Was non-parametric Bayes popular before deep learning? I didn't know Bayes ever took off, given the computational complexity.

  • @twist777hz
    @twist777hz 3 years ago +2

    Can you pls organize your bookshelf? It's driving me insane.

  • @jana8774
    @jana8774 3 years ago

    Is there an uncut version of the interview with Max?

  • @jgpeiro
    @jgpeiro 3 years ago +4

    Hahahah, thanx for the QR code 😂😂😂

  • @jondor654
    @jondor654 1 year ago

    Anyone care to comment on the reasons for the invariance difference between identifiable world objects (or their representations) and the world of symbols, aka text or numerics?

  • @denniscraandijk
    @denniscraandijk 3 years ago

    Is it me or are Yannic's intro clips generated with some sort of lip-sync GAN?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  3 years ago +1

      Yannic recorded that clip from his phone! We stabilised it and removed the background. It's defo in GAN territory now 😜

  • @fredericbarbaresco9793
    @fredericbarbaresco9793 3 years ago

    Interesting video. Max Welling will be a keynote speaker at the GSI'21 conference, co-organized with SCAI Sorbonne and the ELLIS Paris Unit. GSI'21 will be dedicated to "Learning Geometric Structures", with a session on "Geometric Deep Learning": www.gsi2021.org

  • @joneps8021
    @joneps8021 3 years ago

    Can anyone recommend a nice book/script on AI which also covers the developments of the last few years? It seems to be quite an interesting topic.... ;)

  • @MuhsinFatih
    @MuhsinFatih 3 years ago +1

    never gonna give you up

  • @NextFuckingLevel
    @NextFuckingLevel 3 years ago +1

    6:37 hhhouwever! 😂

  • @afafssaf925
    @afafssaf925 3 years ago +4

    WHY ARE ALL THESE SMART PEOPLE SO BUFF!? D:

  • @dru4670
    @dru4670 3 years ago

    Invite Stephen Wolfram on here. He has some amazing ideas about computational irreducibility.

  • @ethanconnelly8794
    @ethanconnelly8794 3 years ago

    You have to be at the point just before chaos to learn best.
    As Jordan Peterson would say, you have to dip a toe into the unknown and then bring that knowledge back, just like going and slaying your dragon.
    Quite interesting how this psychological idea is being proven algorithmically.

  • @yunuscobanoglu6136
    @yunuscobanoglu6136 2 years ago

    What's up with the commenting in between? Just let me watch the talking part.