Neuromorphic computing with emerging memory devices

  • Published Jun 3, 2024
  • This plenary speech was delivered by Prof. Daniele Ielmini (Politecnico Di Milano) during the first edition of the Artificial Intelligence International Conference, held in Barcelona on November 21-23, 2018.

COMMENTS • 53

  • @tjeanneret
    @tjeanneret 4 years ago +32

    I can't believe that only a few people were present for this presentation... Thank you for publishing it.

  • @greencoder1594
    @greencoder1594 4 years ago +55

    *Personal Notes*
    [00:00] Introduction of Speaker
    [01:54] START of Content
    [05:13] CMOS transistor frequency scaled with decreasing node size
    [06:54] The von Neumann architecture uses power for constant communication between CPU and memory
    - in contrast, within the brain, memory and computation are co-located
    [09:06] Neuromorphic hardware might utilize "in-memory computing" and emerging semiconductor memory
    [09:30] Non-volatile memory (brain-like long-term memory)
    - resistance switching memory
    - phase change memory
    - magnetic memory
    - ferroelectric memory
    [10:24] RRAM (Resistive Random Access Memory)
    - dielectric between two electrodes
    - resistance switches to a high-conductance state once the applied voltage exceeds a certain threshold (due to movement of structural defects within the dielectric)
    - can connect neurons with a dynamic weight (a high voltage strengthens the synapse, the opposite polarity weakens it)
    [12:23] STDP (Spike-Timing Dependent Plasticity)
    - based on the relative delay between the post-synaptic and pre-synaptic spikes
    - t = t_post - t_pre
    - long-term potentiation (LTP) when t > 0 (the neuromorphic agent assumes causality from correlation)
    - long-term depression (LTD) when t < 0 (a toy sketch of this rule follows at the end of these notes)
    We simulated an unsupervised spiking neural network with STDP and it performed quite well. It hasn't been built in hardware yet, though.
    [38:08]
    If you say we could port a bee brain to a chip, why not a human brain?
    ->
    Members of the Human Brain Project told me there is a total lack of understanding of how the brain works.
    The human brain appears to be the most complex machine in the world.
    Improvements in lithography might offer chips with a neuronal complexity similar to the human brain's.
    But it would be a waste of time, because we don't know what kind of operating system or mechanism we would have to adopt to make it work.
    We might instead target very small brains and only a few distinct features of a brain
    - like the sensory motor system of a bee
    - or object detection and navigation of the ant
    - ...so very simple brains and functions might be feasible within the next decade
    [40:59]
    How do your examples compare to classical implementations with respect to savings in time and energy?
    ->
    All currently developed neuromorphic hardware uses CMOS technology for diodes, transistors, capacitors and so on.
    A classical transistor network with similar capabilities would require far more space on the chip.
    Thus these new memory types are essential if you want to save energy and complexity the way the brain does.
    [43:54]
    How can you adapt to changes in the architecture, for example when the number or wiring of neurons is supposed to change?
    ->
    You can design your system in a hybrid way, integrating RRAM flexibly into your classical CMOS hardware.
    [46:09]
    Are you trying to develop a device dedicated to AI only, or a (more general?) peripheral device that can replace current GPU acceleration?
    ->
    We are not competing with GPUs, we are targeting a new type of computation. Replacing a GPU with such a network wouldn't make any sense.
    In-memory logic does not seem to be very interesting, considering high cycle times and high energy consumption.
    But using RRAM (or similar technology) to emulate neurons can save you a lot of energy and space on the chip.
    [47:53]
    In-memory computing could have a great impact, because you have a kind of filter that tells you what you really have to compute when changing a value, in a neuromorphic database for example.
    The input is the result _and_ the behavior at the same time; that could be the reason for this big change in energy management
    ->
    Yeah, I totally agree.
    If you compute within the memory, you don't have to move the data from the memory to the processor (see the crossbar sketch after these notes).
    [49:09]
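
    A minimal toy sketch of the pair-based STDP rule noted above, in plain Python. The constants (A_PLUS, A_MINUS, TAU, the conductance bounds) are hypothetical, not from the talk; the weight stands in for an RRAM conductance that potentiating pulses raise and depressing pulses lower:

        import math

        A_PLUS, A_MINUS = 0.05, 0.05  # learning rates (hypothetical values)
        TAU = 20.0                    # STDP time constant in ms (hypothetical)
        G_MIN, G_MAX = 0.0, 1.0       # conductance limits of the RRAM device

        def stdp_update(g, t_pre, t_post):
            """Return the new synaptic conductance after one pre/post spike pair."""
            dt = t_post - t_pre
            if dt > 0:    # pre fired before post: assume causality -> LTP
                g += A_PLUS * math.exp(-dt / TAU)
            elif dt < 0:  # post fired before pre: LTD
                g -= A_MINUS * math.exp(dt / TAU)
            return min(max(g, G_MIN), G_MAX)  # clip to the device's range

        # pre spike at 10 ms, post spike at 15 ms -> the synapse is potentiated
        print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # ~0.539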
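
    And a sketch of the in-memory matrix-vector multiply behind that last answer: in an RRAM crossbar, voltages applied to the input lines produce, by Ohm's and Kirchhoff's laws, output currents I = G·V, so the multiplication happens right where the weights are stored. The numbers below are made up for illustration, and device non-idealities are ignored:

        # Each inner list holds the conductances feeding one output line (made-up values)
        G = [[0.2, 0.7, 0.1],
             [0.9, 0.3, 0.5]]
        V = [1.0, 0.0, 0.5]  # input voltages

        # Each output current is a dot product, computed without moving the weights
        I = [sum(g * v for g, v in zip(conductances, V)) for conductances in G]
        print(I)  # [0.25, 1.15]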

    • @paquitagallego6171
      @paquitagallego6171 3 years ago +1

      Thanks...

    • @everettpiper5564
      @everettpiper5564 3 years ago +1

      I really appreciate these notes. And your final remark is spot on. Very intriguing. You might be interested in a channel called Dynamic Field Theory. Anyhoo, appreciate the insights.

    • @ashwanisirohi
      @ashwanisirohi 2 years ago +1

      What more can I say for what you did... Thanks

    • @celestialmedia2280
      @celestialmedia2280 2 years ago +1

      Thanks for your awesome effort 👍

    • @leosmi1
      @leosmi1 2 years ago +1

      Thank you

  • @JoshuaSalazarMejia
    @JoshuaSalazarMejia 3 years ago +9

    You saved the day by recording and uploading the presentation. Amazing topic. Thanks!

  • @totality10001
    @totality10001 4 years ago +4

    Brilliant lecture. Thank you!

  • @Atrocyte
    @Atrocyte 3 years ago +4

    Thank you for this fascinating lecture and sharing it!

  • @feuxdartificeppp
    @feuxdartificeppp 4 years ago +3

    Great video! Thank you!

  • @holdenmcgroin8917
    @holdenmcgroin8917 5 years ago +4

    Thanks for sharing, very informative presentation

  • @HavenInTheWood
    @HavenInTheWood 3 months ago

    This is great, I'll be watching again!

  • @ashwanisirohi
    @ashwanisirohi 2 years ago

    The talk was good but the questions were better. I like the Prof.'s honesty and smooth answering.

  • @pradhumnkanase8381
    @pradhumnkanase8381 3 years ago

    Thank you!

  • @Artpsychee
    @Artpsychee 1 year ago

    Thank you for sharing your insights

  • @ashwanisirohi
    @ashwanisirohi 2 years ago +2

    Thanks for making and uploading the video in such a nice manner. Very comfortable to follow the contents of the talk.

  • @viswanathgowd4060
    @viswanathgowd4060 2 years ago

    Thanks for sharing this.

  • @entyropy3262
    @entyropy3262 2 years ago

    Thanks, really interesting.

  • @GWAIHIRKV
    @GWAIHIRKV 3 years ago +3

    So are we saying this is another form of memristor?

  • @teamsalvation
    @teamsalvation 3 years ago +5

    Although this is well over my head, I am excited by what is being said, or at least what I think is being said and shown.
    The brain is both a memory and a processor. What they've been able to accomplish is to recreate "the brain" (for talking purposes, I know it's not literal).
    Again, keeping this simple for me: if I were using TensorFlow and running the session on a GPU, would I instead run this session on "the brain" created by Prof. Ielmini? Is the initial input data set still gathered in the traditional sense, or would we be moving data directly into "the brain" from the data capture HW (e.g. a video camera data stream) and then kicking off the session with some HW interrupt once a pre-defined amount of raw data has been transferred?
    This is all really cool stuff!!
    Can't wait to replace my GPUs with NPUs (Neuromorphic Processing Units) :-) with PCI-E 6 x16 (64 GT/s)

    • @jacobscrackers98
      @jacobscrackers98 3 years ago +1

      I would try to email him if I were you. I doubt anyone is looking at YouTube comments.

  • @silberlinie
    @silberlinie 2 years ago

    An absolutely brilliant thing.
    Although this report here is from 2018.
    Has the project made progress since then?
    What is there to report in the meantime?
    Is Politecnico Di Milano still working on it?

  • @SaiBekit
    @SaiBekit 3 years ago +1

    Does anyone understand the difference between this and Neurogrid's architecture?

  • @matthewlove2346
    @matthewlove2346 3 years ago +1

    Is there a paper that goes into more depth that I could read? And if so where can I access it?

    • @cedricvillani8502
      @cedricvillani8502 3 years ago

      IEEE has everything you could ever want, and it's kept updated. Become a member.

    • @cedricvillani8502
      @cedricvillani8502 3 years ago

      New memory device that just came out! The Nvidia NGX Monkey Brain, comes pretrained with a few muscle memory actions such as, throwing poop at a fan, and getting sexually aroused at the sight of a banana.

  • @moizahmed8053
    @moizahmed8053 4 years ago +4

    I want to try these "toy examples" myself... Is there a way to get my hands on RRAM modules/ICs?

    • @davidtro1186
      @davidtro1186 3 years ago +2

      knowm.org/ offers similar memristor technology made in the USA

  • @styx1272
    @styx1272 4 years ago +3

    Too complicated for me; glad others found it enlightening.

  • @jaimepatino1645
    @jaimepatino1645 1 year ago

    And that... [Revelation 13:18] Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six.

  • @onetruekeeper
    @onetruekeeper 3 years ago +1

    This could be simulated using holographic circuits.

    • @brian5735
      @brian5735 19 days ago

      Yeah, I thought of that. Photons would be much more efficient in a quantum computer. Less noise and decoherence

    • @brian5735
      @brian5735 19 days ago

      Just etch the gates

  • @ONDANOTA
    @ONDANOTA 5 years ago +2

    Is this faster than quantum computers? Does it scale exponentially or better?

    • @ONDANOTA
      @ONDANOTA 5 years ago +2

      Answering my own question after googling: yes, it is faster than QCs.

    • @mrpr93cool
      @mrpr93cool 5 years ago +2

      @ONDANOTA faster in what?

    • @anywallsocket
      @anywallsocket 4 years ago +2

      You have to realize what you're asking here. QC is just computing at the nano level, as opposed to micro level, and taking advantage of entanglement / tunneling rather than attempting to avoid it. In principle, one is not "faster" than the other as both operations can unfold at the rate of electromagnetic wave impulses (the fastest you can get). It's just a matter of what physical medium is catalyzing this computational operation. In-Memory computing is a technique for organizing that medium, so as to eliminate the latency between data storage and data manipulation. It's a different ball-game altogether, and in principle, both QC and CC can be organized via this In-Memory technique.

    • @ShakmeteMalik
      @ShakmeteMalik 3 years ago +1

      @anywallsocket Correct me if I am mistaken, but is it not the case that QC aims to eliminate network latency altogether by utilising Spooky Action at a Distance?

    • @anywallsocket
      @anywallsocket 3 years ago +2

      @ShakmeteMalik Depends what you mean by "network latency". For the most part QC is employed for processing information, not storing it - since quantized info is usually too delicate to store. The whole point of in-memory computing is combining the processing and storing, which therefore works much better for classical computing.

  • @davids3116
    @davids3116 2 years ago

    Need to create an ego operating system for AI to improve its capabilities

  • @Nathouuuutheone
    @Nathouuuutheone 3 years ago

    20:57

  • @demej00
    @demej00 2 years ago

    Tough to pour your soul into research for only 10 people.

  • @Ositos_dad
    @Ositos_dad 3 months ago

    I don't understand a word of it.