Liquid Neural Networks, A New Idea That Allows AI To Learn Even After Training

  • Published 8 Jul 2023
  • Daniela Rus currently serves as the Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Rus is a renowned Andrew (1956) and Erna Viterbi professor at CSAIL. With a passion for advancing the field of robotics, Rus has made significant contributions to areas such as autonomous vehicles, swarm robotics, and distributed algorithms. Her research and leadership have earned her numerous accolades, establishing her as a prominent figure in the world of robotics and artificial intelligence.
  • Science & Technology

COMMENTS • 280

  • @LindsayHiebert
    @LindsayHiebert 9 місяців тому +120

    Kudos to Daniela Rus and the Computer Science and Artificial Intelligence Laboratory (CSAIL) team at MIT! Excellent work and innovation!

  • @JChen7
    @JChen7 9 місяців тому +157

    Getting strong “Any sufficiently advanced technology is indistinguishable from magic” vibes right now thanks to the wizards at MIT. Wish I was smart and dedicated enough to learn what's happening here. Looks amazing.

    • @mattjohnson1775
      @mattjohnson1775 9 місяців тому +4

      I agree 100%. Magic is very real... but it's divination/witchcraft. Ask Gordie Rose; he doesn't sugarcoat it, depending on who he's speaking to.

    • @CrazyAssDrumma
      @CrazyAssDrumma 9 місяців тому +4

      Me too, my friend. The good news is there really is no point now, given how much time and effort it would take you (and me). By the time you got there, we'd have SuperAI anyway lol 😂

    • @LabGecko
      @LabGecko 9 місяців тому +21

      The better news is that anyone can learn it. There are enough resources now that anyone can dive into AI, if it's your passion, and learn from the ground up. I did, and now I've trained several AIs (technically, machine learning algorithms) and look forward to getting my hands on (or building) a liquid network soon!
      For reference, I started on Google's AI courses, but SentDex here on UA-cam does a great job with several of his tutorials too. Also, 3Blue1Brown does a great job of explaining anything math-related, and he has a series on machine learning.

    • @AnExplorer1000
      @AnExplorer1000 9 місяців тому

      @@LabGecko Do you write a blog about your projects or have public GitHub projects?

    • @LabGecko
      @LabGecko 9 місяців тому +1

      @@AnExplorer1000 Hadn't thought of that. I'm retired with PTSD, though, so my projects are pretty hit and miss; they might not be something people would want to follow.

  • @justinlloyd3
    @justinlloyd3 9 місяців тому +9

    No paper link, no other links, and no name for the paper. Thanks, Forbes.

    • @LabGecko
      @LabGecko 9 місяців тому +2

      I know. While I thank them for posting the vid, not crediting and posting sources in this day and age of journalism is pathetic.

    • @shubhamdhapola5447
      @shubhamdhapola5447 9 місяців тому +4

      Maybe you're preoccupied with some higher-priority task.
      Let me help you out. Here you go, fella:
      arxiv.org/pdf/2006.04439.pdf
      P.S.
      It just required typing literally 25 characters (including whitespace),
      "liquid networks mit csail",
      into the search bar of whatever search engine you prefer and then checking the top 2-3 links for the exact match.
      As far as giving "credit and recognition" goes, skipping it is utterly unprofessional and unethical on Forbes' part.

  • @thearchersb
    @thearchersb 9 місяців тому +11

    I don't understand anything but I completely agree with her.

  • @philforrence
    @philforrence 9 місяців тому +55

    Curious how the field will receive this. Let's get her on Lex Fridman!

    • @The-Singularity-M87
      @The-Singularity-M87 9 місяців тому +6

      Before I saw your comment, I had already seen this video and posted a link on one of Lex Fridman's videos, since I don't know how to email him directly. Same thought I had.
      Great minds, right 👍

    • @philforrence
      @philforrence 9 місяців тому +4

      @@The-Singularity-M87 You and me, Myron. The greatest of minds!

    • @electrolove9538
      @electrolove9538 9 місяців тому +1

      That autonomous driving was one of Lex's projects right?

    • @katherandefy
      @katherandefy 9 місяців тому +1

      Gosh yeah I would love to hear more than we can get in a short talk.

    • @chrisf1600
      @chrisf1600 9 місяців тому +5

      Oh god please no. "So, uhhhh, can a liquid network ever fall in love ?"

  • @thomasfreund-programandoha961
    @thomasfreund-programandoha961 9 місяців тому +5

    Wow! This is amazing. Thanks for sharing

  • @concernedspectator
    @concernedspectator 9 місяців тому +14

    Absolutely incredible.

  • @TheAiCat1
    @TheAiCat1 9 місяців тому +5

    Amazing!! Thank you ❤

  • @dreamphoenix
    @dreamphoenix 9 місяців тому +4

    Fascinating. Thank you.

  • @energyeve2152
    @energyeve2152 9 місяців тому +1

    very cool. I look forward to the many applications this can be used in. Thanks for sharing.

  • @the_curious1
    @the_curious1 9 місяців тому +3

    Very interesting and a good presentation, thank you!

  • @KevonLindenberg
    @KevonLindenberg 9 місяців тому +53

    This is the innovation in AI that is going to change our world beyond recognition.

    • @Gabcikovo
      @Gabcikovo 9 місяців тому

      Yes

    • @Gabcikovo
      @Gabcikovo 9 місяців тому +1

      9:38

    • @Gabcikovo
      @Gabcikovo 9 місяців тому

      10:00

    • @semitope
      @semitope 9 місяців тому

      It's not AI, or at least it's not intelligent. It's really fancy real-world data processing, like feeding an algorithm data on the stock market and having it do its best to make predictions, except this time they do it with real-world images, human-generated information, etc. It's good that they are able to get computers to produce meaningful calculations from real-world data, but it's to be expected. How do you get a computer to navigate the real world? Feed it a bunch of images and make it capable of combining all of that data to process what the camera is capturing. Next you make it flexible enough to handle data outside what it was fed and hope to minimize errors.
      Come to think of it, the idiots who thought it was OK to let computers mess around with the stock market had better watch what people do there with these new pieces of code.

    • @hufficag
      @hufficag 9 місяців тому

      Yes

  • @user-qw1rx1dq6n
    @user-qw1rx1dq6n 9 місяців тому +2

    Incredible. In my limited understanding, this seems to perform almost the same process as self-attention, in the sense that the effects are the same, but it's just more direct about it.

  • @mahmga1
    @mahmga1 9 місяців тому +2

    Unbelievably groundbreaking from a lay view. I was just saying the other day that there had to be a better approach that redefines the NN.

  • @francisdelacruz6439
    @francisdelacruz6439 9 місяців тому +9

    Really important work. Does it scale to 1000x the neurons? Cooperative networks?

  • @md.adnannabib2066
    @md.adnannabib2066 9 місяців тому

    That's the most impressive thing I have ever seen. Kudos to the researcher.

  • @jeanbernardmbarga3265
    @jeanbernardmbarga3265 9 місяців тому +2

    Great presentation

  • @crackrule
    @crackrule 9 місяців тому +68

    This will make learning faster and produce better networks; the vision-based object detection seems much clearer. Hope this will be out soon, or maybe we need to push this into TensorFlow or PyTorch soon for easy accessibility across the major frameworks. The more experiments are performed using this, the better the outcomes we'll see in the real world.

    • @arlogodfrey1508
      @arlogodfrey1508 9 місяців тому +3

      Move fast break things let's go

    • @Supreme_Lobster
      @Supreme_Lobster 9 місяців тому +3

      I already did some testing with ego localization (finding your relative coordinates at every frame by watching a video) and it seemed promising

    • @whannabi
      @whannabi 9 місяців тому

      ​@@arlogodfrey1508 depends what relies on the things you wanna break...

    • @crackrule
      @crackrule 9 місяців тому +1

      @@Supreme_Lobster Can you share a GitHub link?

    • @Supreme_Lobster
      @Supreme_Lobster 9 місяців тому

      @@crackrule Search for the repo CfC_LiquidNetwork-DeepVO. I use the same username as on here. Can't post the URL because it gets deleted.

  • @Viewpoint314
    @Viewpoint314 9 місяців тому +46

    What is the difference between a neural network and a liquid neural network?
    Unlike traditional neural networks that only learn during the training phase, the liquid neural net's parameters can change over time, making them not only interpretable but also more resilient to unexpected or noisy data. (Apr 19, 2023)

    • @michaelm6928
      @michaelm6928 9 місяців тому +1

      This shouldn't give results as smooth as she showed, should it? Is it learning on the validation set?

    • @vaakdemandante8772
      @vaakdemandante8772 9 місяців тому +6

      How is the network's ability to change its parameters over time connected with it being explainable? Isn't learning "after learning" still learning? That's a lot of claims for really not a lot of substance. Where's the link to the publication? Did anybody replicate those findings? Looks like a bunch of PR fluff, really.

    • @katherandefy
      @katherandefy 9 місяців тому +1

      @@vaakdemandante8772
      It's because they simplified the structure and added a tree-like wiring, which makes it easier for us and the machine to focus on data that was previously computed in one large block.

    • @rpcruz
      @rpcruz 8 місяців тому

      The parameters do NOT change after training. The neural network's output is a derivative; that is, the outputs are relative to each other, and the network's focus is on how the new input influences the output relative to the previous one.

    • @gpt-jcommentbot4759
      @gpt-jcommentbot4759 6 місяців тому

      @@rpcruz Reread the comment

  • @user-qw1rx1dq6n
    @user-qw1rx1dq6n 9 місяців тому +4

    It is unbelievable what they managed to do with 20,000 parameters. I must learn this technique fast.

  • @web3global
    @web3global 8 місяців тому

    WOW! Amazing, thanks for sharing Forbes! 🚀

  • @thorvaldspear
    @thorvaldspear 9 місяців тому +54

    It's interesting how this team has been talking about this invention for over a year, and yet has failed to gather significant attention despite the revolutionary qualities of liquid neural networks. Perhaps there is a catch that they are not telling us about?

    • @sidneymonteiro3670
      @sidneymonteiro3670 9 місяців тому +3

      MARKETING!
      The commercial products capture the headlines.
      It has nothing to do with having a catch.

    • @BB-uy4bb
      @BB-uy4bb 9 місяців тому +29

      @@sidneymonteiro3670 Ah, the team members at the big labs/companies always scan and read through all the new papers; if this were so much better than what we currently have, then everyone would use it. Science in this area does not need marketing at all.

    • @jeremykothe2847
      @jeremykothe2847 9 місяців тому +3

      @@sidneymonteiro3670 So why has nothing been commercialised?

    • @thad1300
      @thad1300 9 місяців тому +19

      @@jeremykothe2847 You guys are talking as if all the currently existing work was in the research phase a year ago lol. Even the pathway from "Attention is All You Need" to recent LLMs took a few years, and that's really fast.
      The recent explosion in AI is really hardware-driven: the realization that as long as you just add more compute power, your models become a lot better. But the fundamental research on AI/ML was done decades ago. We're hitting a hardware compute wall soon, and further improvements will be made on the algorithmic side.

    • @NeonTooth
      @NeonTooth 9 місяців тому +5

      I think there are much better ways of conceptualizing neural networks in development that will drastically lower the compute necessary to run them, and liquid neurons are certainly an example of this. I mean, you can run it on a Raspberry Pi. That's the kind of thing that will make models far more accessible to open-source folks, as well as strip larger tech companies of their monopolies over the models.

  • @benealbrook1544
    @benealbrook1544 9 місяців тому +4

    Amazing, this is revolutionary.

  • @SilenceOnPS4
    @SilenceOnPS4 9 місяців тому +3

    Can someone please inform me of the advantages of LNNs (if they can be used) for diffusion models such as Stable Diffusion, DALL-E, and Midjourney?
    If I am right, these diffusion models use DNNs?

  • @education.online_frevryone
    @education.online_frevryone 8 місяців тому

    I was wondering a few days ago about black boxes and now we have liquid neural networks. Amazing 😍

  • @PixelPulse168
    @PixelPulse168 9 місяців тому

    Thanks for sharing

  • @MathPhysicsEngineering
    @MathPhysicsEngineering 9 місяців тому +2

    No link for the original paper in the description?

  • @superuser8636
    @superuser8636 9 місяців тому +10

    CSAIL is the premier AI lab at MIT! I know because I worked there developing their AI infrastructure 😂 I really dig this experiment and talk.

  • @LearnAINow
    @LearnAINow 9 місяців тому +7

    How does this network react when confronted with outside noise that directly affects the trained task? How does this compare with the other forms of networks? Thank you. I’d love to know more

    • @revanthchouhan3068
      @revanthchouhan3068 2 місяці тому

      Same doubt @LearnAINow. If anyone’s into this, please share some knowledge.

  • @joeriben
    @joeriben 9 місяців тому +13

    Amazing. Moving human targets can be tracked by drones, independent of place and season. Isn't that what we have all been waiting for?

    • @daveloomis
      @daveloomis 9 місяців тому

      😅

    • @ps3301
      @ps3301 9 місяців тому

      The perfect terminator machine, which can track you all day long. The utopia we're all looking for.

  • @SuperMaDBrothers
    @SuperMaDBrothers 9 місяців тому

    9:40 love this cut

  • @vladyslavkorenyak872
    @vladyslavkorenyak872 9 місяців тому +27

    This feels like the brain trying out new neurons to improve its functioning!

    • @omop5922
      @omop5922 9 місяців тому +2

      This video was obviously not for you, mate.

    • @jakebrowning2373
      @jakebrowning2373 9 місяців тому +2

      @@omop5922 Who's it for?

    • @technolus5742
      @technolus5742 9 місяців тому +1

      Exactly. Changing the internal configuration of the neurons during training seems to allow for more efficient and powerful models. This is the kind of fundamental breakthrough that this field needs in order to continue making progress beyond larger and larger models.

  • @coalkey8019
    @coalkey8019 9 місяців тому +1

    Wow. That is absolutely huge.

  • @j.d.4697
    @j.d.4697 9 місяців тому +26

    Wow, from 100,000 to 19 neurons!
    Can those liquid neurons be similarly scaled??

    • @ricosrealm
      @ricosrealm 9 місяців тому +4

      I don't think that's what she meant. She said those are 19 liquid networks, which likely comprise thousands of neurons as well.

    • @alaapdhall8541
      @alaapdhall8541 9 місяців тому +15

      @ricosrealm Look at the figure she showed. Essentially she replaced the fully connected layers, with their roughly 100k neurons spread across layers, with 19 liquid neurons divided into three groups: 12 interneurons, 6 command neurons, and 1 final motor neuron. Not 19 different networks. She calls these networks with liquid neurons "liquid networks".
      That's what is impressive: instead of 100k neurons she used only 19. However, as mentioned in the video, each neuron is governed by differential equations instead of the typical f(a(wx+b)), so each neuron's computation is more complex, but it's still better than 100k neurons. Also, she has not replaced the CNN; she has only replaced the FC layers.
      Ideally, a fully convolutional model like YOLO should still do better on its own, as it doesn't have FC layers as such, but in attention-based transformers this can be useful, as they often use FC layers.
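      A rough sketch, in plain PyTorch, of the pipeline this comment describes: a convolutional feature extractor kept as-is, with the large fully connected head swapped for a tiny recurrent "liquid" head. This is not the authors' code; the backbone and the nn.RNNCell below are stand-ins chosen for illustration, with the RNNCell taking the place of the actual 19-neuron liquid wiring.

      import torch
      import torch.nn as nn

      class LiquidHeadDriver(nn.Module):
          def __init__(self, n_features=32, n_liquid=18, n_motor=1):
              super().__init__()
              # stand-in convolutional feature extractor (kept, as in the talk)
              self.backbone = nn.Sequential(
                  nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                  nn.Conv2d(16, n_features, 5, stride=2), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
              )
              # placeholder recurrent cell in place of the tiny liquid head
              # (the 12 inter + 6 command neurons); NOT an actual LTC cell
              self.liquid = nn.RNNCell(n_features, n_liquid)
              self.motor = nn.Linear(n_liquid, n_motor)  # 1 motor neuron -> steering

          def forward(self, frames):                     # frames: (batch, time, 3, H, W)
              h = frames.new_zeros(frames.size(0), self.liquid.hidden_size)
              for t in range(frames.size(1)):
                  h = self.liquid(self.backbone(frames[:, t]), h)
              return self.motor(h)

      model = LiquidHeadDriver()
      steering = model(torch.randn(2, 8, 3, 64, 64))     # 2 clips of 8 frames each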

    • @AnExplorer1000
      @AnExplorer1000 9 місяців тому +6

      @@alaapdhall8541 Hi there! I don't comment very often, but I have a couple of questions for you if you don't mind. You seem very knowledgeable about these things. As for myself, I love mathematics and work on it whenever I can, but I don't know anything about LNNs, ML, or AI. How did you reach a point where you understood all these things you're commenting about? Did you perhaps go through Andrew Ng's course or something similar? What's your educational background? Thanks in advance.

    • @GreenCowsGames
      @GreenCowsGames 9 місяців тому +11

      @@AnExplorer1000 If you binge a bunch of YouTube videos on deep learning you can learn a lot; then the jump to reading papers will not be that big. Channels like Yannic Kilcher are really good. If you want to implement things, there are lots of resources on the PyTorch website itself.

    • @han_0210
      @han_0210 9 місяців тому +2

      As she mentioned in the video, you can see the liquid time-constant equation for each neuron, and she also mentioned how they change the wiring of the network.

  • @aboucard93
    @aboucard93 9 місяців тому

    This is amazing

  • @bhargavsai2449
    @bhargavsai2449 9 місяців тому

    Excellent, blown away.

  • @mariomariovitiviti
    @mariomariovitiviti 8 місяців тому

    This is huge.
    Robust under data distribution shift, by targeting more task-relevant features. This means less data is necessary for continual learning, which is the only, and super costly, way to keep a model in production.

  • @imranbaloch3414
    @imranbaloch3414 9 місяців тому

    Excellent achievement!

  • @eduardomanotas7403
    @eduardomanotas7403 9 місяців тому +1

    Hey, any link to the paper or a Git repository?

  • @ItzGanked
    @ItzGanked 9 місяців тому

    Good for alignment, if the architecture works well.

  • @stanleyashiwel7047
    @stanleyashiwel7047 9 місяців тому

    Thank you

  • @hellucination9905
    @hellucination9905 5 місяців тому

    I'm no expert, just curious, but it seems to me like (1) a form of continuous self-reflexivity regarding the specific neuronal changes produced within the liquid network in relation to the produced output effects; and (2) a mapping of the causal relationships between these internal neuronal reconfigurations over time and the specific output effects they produce.

  • @dairop3220
    @dairop3220 9 місяців тому +1

    Is it fundamentally different from the NEAT algorithm?

  • @muhonbhuiyan8687
    @muhonbhuiyan8687 9 місяців тому +1

    What is the problem if you use sonar instead of a camera 📷 as the input? 🤔

  • @kkviks
    @kkviks 9 місяців тому

    Interesting!

  • @erikdong
    @erikdong 9 місяців тому

    Bravo!

  • @ophthojooeileyecirclehisha4917
    @ophthojooeileyecirclehisha4917 9 місяців тому

    thank you

  • @Kugelschrei
    @Kugelschrei 9 місяців тому +3

    So basically liquid networks are neural networks with differential equations instead of "standard" sigmoid activation functions?
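    Roughly, yes: each unit's state follows a differential equation whose dynamics depend on the input, rather than a single static squashing function. Below is a toy numerical sketch of that contrast, based on the liquid time-constant formulation in the paper linked elsewhere in this thread (arxiv.org/pdf/2006.04439.pdf); the parameter names and values are illustrative only, not the authors' code.

    import numpy as np

    def standard_neuron(x_in, w, b):
        # conventional static unit: y = sigmoid(w . x + b)
        return 1.0 / (1.0 + np.exp(-(np.dot(w, x_in) + b)))

    def liquid_neuron_step(x, x_in, w, b, tau, A, dt=0.05):
        # one Euler step of a liquid time-constant unit:
        #   dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
        # the state x evolves over time, and its effective time constant
        # depends on the current input, unlike the static unit above
        f = np.tanh(w[0] * x + np.dot(w[1:], x_in) + b)
        dxdt = -(1.0 / tau + f) * x + f * A
        return x + dt * dxdt

    x = 0.0
    for _ in range(100):   # drive the unit with a constant input and let it settle
        x = liquid_neuron_step(x, np.array([0.5]), np.array([0.3, 0.8]), 0.1, tau=1.0, A=1.0)
    print(x, standard_neuron(np.array([0.5]), np.array([0.8]), 0.1))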

  • @Jukau
    @Jukau 8 місяців тому +1

    Isn't that another huge step in the direction of AGI?

  • @pytebyte
    @pytebyte 9 місяців тому +1

    Wondering why, in the first driving example, the camera input stream is quite noisy, but when they switch to the liquid network it smooths out. Anyway, interesting work.

    • @LabGecko
      @LabGecko 9 місяців тому

      First, a few assumptions: 1) I think they probably developed the driving in-house, so it isn't likely to have the same richness of data as something like Tesla or Waymo. 2) I doubt they're employing the same level of computational power as Tesla, Google, et al. for this project. I haven't read their paper on the topic, so this is completely off-the-cuff; take it as such.
      Given that, I think the first is grainy simply because the model's data is noisier than it had to be and possibly not finely tuned, so it has some inherent bias issues that make it pay attention to more details than it needs. As for the liquid net, that's simply the nature of derivative math: derivatives (tend to) smooth out lower-level formulas, so it makes sense to me that the image-recognition result is going to be more gradients than pixel-sharp black-and-white decisions.

    • @pytebyte
      @pytebyte 9 місяців тому +1

      @@LabGecko Thank you for the answer. Maybe I've got it completely wrong, but when I read "camera input stream" I assume we are talking about untouched data coming directly from the car's camera, so whatever model processes this data in the next step gets the same quality. Their presentation gives me the feeling they wanted to show more noise in the attention map, and that's why they added some extra noise to the camera input feed.

    • @LabGecko
      @LabGecko 9 місяців тому

      @@pytebyte good chance you're right, good point. I certainly can't be sure, not being there myself. :D

    • @rheedan
      @rheedan 9 місяців тому +3

      If I understand it correctly, the noise in the attention map isn't noise from the camera feed. The noisy bright parts of the image represent the attention weights of the model; in other words, the bright pixels represent areas the model thinks are important for making its predictions. The problem with the CNN is that it pays attention to lots of things that humans wouldn't pay attention to, while the liquid network has a tighter focus and pays attention in a way that makes sense to us.
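      For anyone curious how such importance maps are produced in general, one common and simple recipe is a gradient-based saliency map, sketched below. This is a generic technique and an assumption on my part; the talk may use a different attribution method.

      import torch

      def saliency_map(model, image):
          # image: (3, H, W); highlight pixels whose change most affects the output
          image = image.clone().requires_grad_(True)
          output = model(image.unsqueeze(0))          # forward pass on a batch of one
          output.sum().backward()                     # d(output)/d(pixel) for every pixel
          return image.grad.abs().max(dim=0).values   # collapse channels -> (H, W) map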

  • @yogiwp_
    @yogiwp_ 8 місяців тому

    This seems like a bigger breakthrough than anything else in the AI news over the past couple of months?

  • @johnniefujita
    @johnniefujita 9 місяців тому +1

    I learned about liquid neural networks something like 3 years ago... but I just couldn't find any base model to implement them. Does anyone know where to find a repository with code to implement this?

    • @Tbone913
      @Tbone913 9 місяців тому +2

      The repo of the inventor.

  • @prilep5
    @prilep5 9 місяців тому +3

    Imagine Lewis Hamilton training an AI bot, if only a computer could scan his brain and eye movements to learn his decision-making.

  • @broyojo
    @broyojo 9 місяців тому +19

    I would have liked to see a transformer model comparison, seeing as it is the current state of the art for many AI problems

    • @michaelm6928
      @michaelm6928 9 місяців тому +5

      You can probably make the transformer “liquid”

    • @DS-nv2ni
      @DS-nv2ni 9 місяців тому +13

      You cannot make a Transformer "liquid"; embeddings are not compatible with this approach.
      Regarding Transformers being the SOTA for AI problems, I'm not sure about that. They work great for NLP problems and generative content creation, but those only overlap with a fraction of the issues that are important to solve through AI, and not even the most important ones.
      On top of that, the results are not good enough. Transformers have no causality to start with, and even if Liquid Networks seem to have causality, they can't really "understand": a device that can represent causality doesn't necessarily have an understanding of it (like when you program a computer). So Transformers are two steps behind what we need, and Liquid Networks seem to be a step forward in that direction, yet they still can't solve the problems at which Transformers excel.
      The "understanding" step we are missing is more than a step; it's a long run, and it doesn't look close at all, probably decades from now. At the moment AI is mostly hype, unfortunately.
      EDIT: Typos.

    • @skadday
      @skadday 9 місяців тому +1

      @@DS-nv2ni You don't have a single clue what you are talking about.

    • @DS-nv2ni
      @DS-nv2ni 9 місяців тому +13

      @@skadday I think I have a clue and an informed opinion, after spending twenty years as a researcher and engineer working on AI systems. On the other hand, someone who, like you, jumps on a topic saying that others don't have a clue, without even pointing out why in a reasonable way, is indeed the type of person who doesn't understand the topic but has some strong bias to defend.
      I can't even imagine how I may have triggered you; perhaps it was the part about AI being hype, just because I've had previous experience of people getting worked up about that.
      I hope that's not the case for you and that you actually have some valid point to make; otherwise, I suggest you find a better way to spend your time.

    • @skadday
      @skadday 9 місяців тому

      @@DS-nv2ni You clearly don't even have an argument.

  • @GBlunted
    @GBlunted 9 місяців тому +1

    No link to her paper?? That's messed up...

  • @sydneyrenee7432
    @sydneyrenee7432 5 місяців тому

    When is this going to be used for NLP and AGI?

  • @Asha-td7bm
    @Asha-td7bm 9 місяців тому

    Amazing

  • @PeterMoueza
    @PeterMoueza 9 місяців тому +2

    7:20 causal

  • @The-Martian73
    @The-Martian73 9 місяців тому +2

    The growth of AI is exponential, meaning it is an industrial revolution. I am not freaked out though; things really are, and will be, going naturally as expected!!

  • @SaiyanGokuGohan
    @SaiyanGokuGohan 9 місяців тому +2

    Neuromorphic computing is where it’s at, using spiking neural networks.

    • @chrisf1600
      @chrisf1600 9 місяців тому

      Great comment. It's striking that none of the current ANN approaches uses "spikes" of activation. That's totally unlike how our brains work. Presumably, evolution uses spiking neurons for a reason. I wonder if the AI industry is heading down yet another blind alley?

  • @sumansaha295
    @sumansaha295 9 місяців тому +5

    Hmm I thought the hype had died down on these. Hopefully something good comes out of it so people don't need server farms to do AI research.

  • @MrChaluliss
    @MrChaluliss 9 місяців тому +2

    How y'all gonna not link the papers relevant to this talk?

  • @balakrishnaprabhunallendra999
    @balakrishnaprabhunallendra999 9 місяців тому +10

    She should have been given the opportunity, and a proper setup, to sit down and present the matter in question!

    • @DistortedV12
      @DistortedV12 9 місяців тому +1

      She needs a chair for sure

    • @fog3911
      @fog3911 9 місяців тому +2

      @@DistortedV12 aw man

    • @raphaelcardoso7927
      @raphaelcardoso7927 9 місяців тому +3

      Maybe she was offered a chair and declined; we all know that standing during a presentation makes it easier to hold the audience's attention.

    • @gaetanomontante5161
      @gaetanomontante5161 9 місяців тому +1

      My dear friend, that is irrelevant. I am glad she stood tall, very tall, when presenting us with such an innovation, one that has the potential to truly change many of the "old" ways we look at things. I am a journeyman, but I was totally awed at the ability of this new approach, liquid neurons, to deliver effective solutions to many problems. I want to kiss her own liquid neurons with true Human love.
      One thing seems a little strange at the time of my writing: despite there having been 10,502 views, I see only 257 likes, including mine, and ONLY 38 previous comments, and, may I add, most of them are perfunctory and at least one is totally inane.

    • @escesc1
      @escesc1 9 місяців тому

      I doubt she was not offered a chair. She simply preferred to stand up :D

  • @alirezagoudarzi1915
    @alirezagoudarzi1915 9 місяців тому +2

    Amini and Hasani, two Iranians, are leading this project. It amazes me how these boys are changing the world!! 👏👏

  • @tallwaters9708
    @tallwaters9708 9 місяців тому

    But just because that first car is focused on, e.g., the side of the road (which is perhaps a heuristic visualization anyway), that's not necessarily bad. What if, for example, there's a kid at the side of the road? I'd want the network to be on the lookout for that!

  • @greyowlaudio
    @greyowlaudio 9 місяців тому +1

    The entire field of AI is at risk now that companies are paywalling their data and/or charging obscene amounts to use it. That's a major short-term spectre that will need to be dealt with.

  • @kipling1957
    @kipling1957 9 місяців тому

    Relevance realization

  • @guten5221
    @guten5221 8 місяців тому

    Please make something like skynet

  • @Viewpoint314
    @Viewpoint314 9 місяців тому +1

    What is a liquid neural network?
    Liquid Neural Networks: Definition, Applications ...
    A Liquid Neural Network is a time-continuous Recurrent Neural Network (RNN) that processes data sequentially, keeps the memory of past inputs, adjusts its behavior based on new inputs, and can handle variable-length inputs to enhance the task-understanding capabilities of NNs. (May 31, 2023)
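    A toy illustration of the "time-continuous" part of that definition (my own construction, not from the cited article): the hidden state is advanced by the actual elapsed time between samples, so irregularly sampled, variable-length sequences are handled without padding or resampling.

    import numpy as np

    def leaky_step(x, u, dt, tau=1.0):
        # a simple continuous-time leaky integrator standing in for a liquid cell:
        # dx/dt = -x/tau + tanh(u), integrated with one Euler step of size dt
        return x + dt * (-x / tau + np.tanh(u))

    def run_sequence(inputs, timestamps):
        # inputs[i] arrives at timestamps[i]; sequences can be any length
        x, t_prev = 0.0, timestamps[0]
        for u, t in zip(inputs, timestamps):
            x = leaky_step(x, u, dt=t - t_prev)
            t_prev = t
        return x

    print(run_sequence([0.2, 0.9, -0.4], [0.0, 0.5, 2.0]))   # unevenly spaced samples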

  • @testales
    @testales 9 місяців тому

    It's a little bit of cheating to give one NN a noisy camera input stream and the other a clear stream, isn't it? ;-) Either way, I'm looking forward to some implementations of this.

  • @FrankAbagnale1
    @FrankAbagnale1 9 місяців тому

    How do I implement this in TensorFlow, PyTorch, etc. as a programmer? My brain fried at the math part.

  • @Sooyush
    @Sooyush 9 місяців тому

    Daniela & Team❤❤❤

  • @GT-tj1qg
    @GT-tj1qg 9 місяців тому

    This talk seems a bit off. Any of that testing she showed could be completely unfair and we would have no way to know without looking at her data. How many neurons did they give the deep nets vs their liquid nets? Did they use the same training data for both? Did they choose an unusually small dataset to make the deep net underperform?

  • @Voltlighter
    @Voltlighter 9 місяців тому +2

    So they work more similarly to real neurons then?

  • @ApteraEV2024
    @ApteraEV2024 9 місяців тому

  • @MrChaluliss
    @MrChaluliss 9 місяців тому +7

    It's kind of strange just how quickly she's going through these slides; what the heck else is so important that this needs to be rushed? This seems like a significant breakthrough. Maybe I am overreacting, but I get the sense this is one of those big steps forward that may make a big difference in enabling AI to be used in a variety of problem cases.

    • @technolus5742
      @technolus5742 9 місяців тому +4

      She likely needs to stay within the allocated time for her talk. While this seems like a breakthrough, this is only a talk.

  • @andybrice2711
    @andybrice2711 9 місяців тому +1

    Surely it's entirely reasonable that an autonomous vehicle would be paying attention to bushes at the side of the road? There could be potential hazards obscured by those bushes. Like people or animals behind them.

    • @GT-tj1qg
      @GT-tj1qg 9 місяців тому +1

      In an advanced system, perhaps that would be a very good feature to include. But I suspect the models they were demonstrating here were just designed for lane-keeping (finding the road and staying on it)

  • @sc-uk5xg
    @sc-uk5xg 8 місяців тому

    Just remember MIT was already working on this last year maybe longer. Don't let them have your biometrics. They will own you and everything you think you own. It will be as easy as hitting delete on a keyboard to totally erase you.

  • @nias2631
    @nias2631 9 місяців тому +1

    Are these based upon Liquid State Machines or entirely different?

    • @exmachina767
      @exmachina767 9 місяців тому +2

      According to ChatGPT: “Liquid neural networks (Liquid NNs) and liquid-state machines (LSMs) are closely related concepts, often used interchangeably or as variations of the same idea. Both Liquid NNs and LSMs emphasize the utilization of continuous dynamics and interaction to process information.
      Liquid-state machines were introduced as a type of recurrent neural network inspired by the behavior of liquid matter. In LSMs, the computational units, often called "liquid neurons," have continuous activation dynamics governed by nonlinear differential equations. These units interact with each other through dense and recurrent connections, forming a liquid-like medium. The dynamics of the liquid neurons allow for the computation of temporal patterns and the processing of time-varying information.
      The term "liquid neural networks" is often used to refer to a broader class of unconventional neural network architectures that share similarities with liquid-state machines. While LSMs specifically emphasize the liquid metaphor and dynamics, liquid neural networks can encompass a wider range of architectures that incorporate liquid-like properties.
      In essence, liquid neural networks and liquid-state machines are closely related concepts that aim to harness the power of continuous dynamics and interaction in neural computation. The distinction between them may lie in the specific architectural variations, training algorithms, or implementation details, but they share the common goal of utilizing liquid-like properties for information processing.”

    • @nias2631
      @nias2631 9 місяців тому

      @@exmachina767 So it's in the reservoir computing family. LSMs have been around since 2002 or so. If this group is rebranding it, I guess I will have to go through their paper and see what is so different.

  • @Aldraz
    @Aldraz 9 місяців тому +3

    I mean this is cool and all, but can it be applied to language models?

    • @LabGecko
      @LabGecko 9 місяців тому +1

      Of course. It's just a different dataset. The neuron structure/math just allows it to learn post-training, which should be a definite advantage for LMs.

    • @Aldraz
      @Aldraz 9 місяців тому +1

      @@LabGecko I am not so sure about that; transformers are very different. Even if it did work, it might not work as efficiently.

    • @LabGecko
      @LabGecko 9 місяців тому +1

      @@Aldraz We're talking about predicting language tokens, right? To me they're just a gradient over a list of sounds with a statistical chance of being used based on their neighbors, and image data is effectively a grayscale gradient with a statistical chance of being useful based on what is around it. Am I missing a piece?

    • @Aldraz
      @Aldraz 9 місяців тому +1

      @@LabGecko I guess you are correct, but with my limited understanding you wouldn't be able to easily switch over and use transformers as before, because most transformers are not RNN-based. But I could be wrong.

    • @clray123
      @clray123 9 місяців тому +1

      @@LabGecko The current transformer-based algorithms process discrete sequences of tokens. CNNs were tried in the beginning and they struggled with modelling language in which there are strong time/causal dependencies between tokens at different distances in the sequence that CNNs can't capture well. RNNs did ok in principle, but they did not scale because they could not be trained on an entire sequence of tokens in parallel like transformers can today. I have no idea whether Liquid NNs suffer from the same problem, but the comparisons to LSTM and RNN do not bode well.

  • @Glowbox3D
    @Glowbox3D 9 місяців тому

    Stupid question: if Elon and his team weren't aware of this method, and then became aware of it when already years into developing their own systems, would they potentially re-route their own models, or switch them out entirely to use a new method like LNNs? Or are they so far in on their own models that they wouldn't dare touch a new system? I would assume that if a *clearly* better technology comes around, innovators are sort of 'forced' to make the change as well?

    • @TimothyOBrien6
      @TimothyOBrien6 9 місяців тому

      They will switch to use this. Their system is modular enough that they can swap out the black-box neural network for another that has the same inputs and outputs, and it shouldn't be very hard.

    • @GT-tj1qg
      @GT-tj1qg 9 місяців тому +1

      @@TimothyOBrien6 Where did you learn that, I wonder? Everything I've read indicates that Tesla has dedicated vast computational resources to its existing deep neural network system, and discarding it would waste a significant financial investment.

    • @GT-tj1qg
      @GT-tj1qg 9 місяців тому +1

      Glowbox, I'm not sure this is what the creators are claiming it to be. I suspect it has more limitations than they are letting on.

    • @technolus5742
      @technolus5742 9 місяців тому

      @@GT-tj1qg My guess is that they will be forced to change if a better route becomes apparent. Continuing to sink money into something that doesn't work very well would be the worse alternative.
      Besides, the data they have collected and their current model can be used to train the new model, avoiding the issue of having to start from scratch.

  • @blengi
    @blengi 9 місяців тому

    What's the killer app for this versus AI products like FSD, GPT-4, Midjourney, AlphaFold, etc. that are already changing the world?

    • @samaBR_85
      @samaBR_85 9 місяців тому

      If it gives good results and uses less energy, it's already a killer!

    • @blengi
      @blengi 9 місяців тому

      @@samaBR_85 So, beyond self-aggrandizing claims, the market must already be implementing this self-evidently superior technology in some awesomely popular application (or twenty) that I can download or read drooling reviews about. What are they?

    • @GT-tj1qg
      @GT-tj1qg 9 місяців тому

      I don't know the killer app as you describe it, but this is a clue to finding it: the main difference of liquid nets is that they try to find fundamental relationships, rather than statistical clusters. This has the benefit of being less noisy and more consistent, but at the cost of potentially learning a completely wrong solution to the problem.

    • @gpt-jcommentbot4759
      @gpt-jcommentbot4759 6 місяців тому

      @@blengi There is no "app". AI takes a while to actually get recognized in the market. Just take a look at GPT-3, it was widely known in ML but not really talked about outside of it. Besides, apps are probably not gonna recognize this and are just going to use the same generic architecture. Over and over again until something truly revolutionary arrives, and they will use that over and over again.

  • @redblue2644
    @redblue2644 9 місяців тому +2

    Did she say liquid network solutions adapt better because the equations are in effect less complex, and so they focus on less?

    • @LabGecko
      @LabGecko 9 місяців тому +1

      That isn't what I heard. My understanding is that they adapt better because part of the current data gets re-introduced on a continuous basis, but is smoothed out by the derivative functions. But I'm open to being corrected.

  • @jackbauer322
    @jackbauer322 6 місяців тому

    Yeah, well, basically they mimic focusing the way we do and are robust to context change... at last!

  • @Gabcikovo
    @Gabcikovo 9 місяців тому

    8:26

  • @sachinknight19
    @sachinknight19 9 місяців тому

    ❤❤❤

  • @technowey
    @technowey 9 місяців тому +1

    Neural nets that learn after training are *not* a new idea. I have a book about that, with the algorithms, that I bought in 2015.
    If I had access to my library now, I'd post the title.
    I'll watch this video to see if these are similar algorithms.
    I'm skeptical about the claims about "causality." Even adaptive networks find patterns in data that might show causality; however, they will also find correlations that do *not* necessarily reflect a causal relationship.

    • @technolus5742
      @technolus5742 9 місяців тому

      Looking at those attention graphs, it does seem to do well regarding causality, sifting better through the noise.

    • @jebprime
      @jebprime 9 місяців тому

      It creates a representation of the environment that should converge to an equilibrium or some sort of pattern unless new input changes its internal representation of the environment.
      I think that's what they use to support their notion of causality.

    • @georgetrench2809
      @georgetrench2809 8 місяців тому

      The thing is that, with the incredible number of neurons and all the interactions that take place between them, it is very difficult, if possible at all, to fathom how the transformer network arrived at its conclusions, whereas with the liquid neural network this all becomes quite evident...

  • @Madayano
    @Madayano 8 місяців тому

    👍

  • @lostpianist
    @lostpianist 9 місяців тому +5

    This seems like a natural consequence of improved tech rather than a 'new idea', but all discovery is serendipity, really, anyway. Cool.

    • @LabGecko
      @LabGecko 9 місяців тому +12

      No, their method of manipulating the math is groundbreaking. Current models need _billions_ of neurons to do things like what OpenAI has done on ChatGPT, and this model does the same with *_19!?_* Seriously groundbreaking.

    • @kayakexcursions5570
      @kayakexcursions5570 9 місяців тому +3

      I agree. Nothing to see here.

    • @f.jideament
      @f.jideament 9 місяців тому +1

      @@LabGecko How do you know their efficiency and precision are the same? Did you check and compare the data for every possible problem? What is the definition of "better" here?

  • @stevenesposito9305
    @stevenesposito9305 9 місяців тому

    Interesting…

  • @salehisabeyki4275
    @salehisabeyki4275 7 місяців тому

    1:12

  • @NeonTooth
    @NeonTooth 9 місяців тому

    This is cool and all, but I recommend watching the original talk by Ramin Hasani. The saliency map she shows for the traditional network is made to look bad by introducing noise into the input image, whereas the liquid neuron example is not affected by the noise. Slightly dishonest representation of the results.

  • @ps3301
    @ps3301 9 місяців тому +7

    There are no simple math demonstrations of this model

  • @greatsol2444
    @greatsol2444 9 місяців тому +1

    Exciting times, we’re living in…

  • @katherandefy
    @katherandefy 9 місяців тому

    Hopefully YT does not delete the paper link for this idea, since the platform hosting these talks does not advertise the work directly but is neutral, or at least that is my assumption…
    See the link in my reply to myself here.

  • @thienthetyga3462
    @thienthetyga3462 8 місяців тому

    I, for one, welcome our AI overlords

  • @emreon3160
    @emreon3160 9 місяців тому

    Try chocolate AI, eye i sir! 😂

  • @StephenRoseDuo
    @StephenRoseDuo 5 місяців тому

    Isn't this from ~4 years ago?

  • @dag410
    @dag410 9 місяців тому

    🎉

  • @krox477
    @krox477 9 місяців тому +1

    What is the "liquid" here

    • @LabGecko
      @LabGecko 9 місяців тому

      If I had to guess, it's the derivational smoothing of equations handling input

    • @shubhamdhapola5447
      @shubhamdhapola5447 9 місяців тому +2

      It's the ability of the network to learn new patterns during inference on real-world test data. The canonical way, for traditional networks, is to perform all the "learning" during the training phase and become rigid/static once the network has been sufficiently trained.
      Another aspect of it being "liquid" is that it can handle variable-length time-series data.
      Link to the paper:
      arxiv.org/pdf/2006.04439.pdf
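      One more way to see the "liquid" part, sketched from the liquid time-constant equation in that paper (the toy numbers and parameter names are mine): the learned parameters stay fixed after training, but the neuron's effective time constant depends on the current input, so the dynamics shift with whatever data it sees at test time.

      import numpy as np

      def effective_time_constant(x, u, tau=1.0, w=(0.8, 0.5), b=0.0):
          # f is the learned nonlinearity; a sigmoid keeps the drive positive here
          drive = 1.0 / (1.0 + np.exp(-(w[0] * x + w[1] * u + b)))
          # LTC formulation: tau_effective = tau / (1 + tau * f(x, u))
          return tau / (1.0 + tau * drive)

      print(effective_time_constant(0.1, 0.0))   # weak input   -> slower dynamics
      print(effective_time_constant(0.1, 4.0))   # strong input -> faster dynamics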

    • @krox477
      @krox477 9 місяців тому

      @@shubhamdhapola5447 Thanks, I'll definitely learn about it.