New Research Suggests Putting AI to Sleep

  • Published Jan 5, 2023
  • In this video I discuss a new research paper that suggests a new way to cope with catastrophic forgetting in Artificial Intelligence.
    #AI #NewResearch #newpaperpattern
    Links:
    The Paper: journals.plos.org/ploscompbio...
    The Book: amzn.to/3jUZs1d
    The Rocket: amzn.to/3GkrgDt
    Support me on Patreon: / anastasiintech
    My Newsletter: anastasiintech.substack.com

COMMENTS • 605

  • @AnastasiInTech 1 year ago +83

    Let me know what you think!

    • @cezariusus7595 1 year ago +11

      AI will be the end of humanity

    • @1_McGyver 1 year ago +8

      Few Minutes Papers with Anastasi the Engineer 👩‍💻 ❤

    • @I-Dophler 1 year ago +3

      In a world where Artificial Intelligence plays an increasingly significant role in our lives, we must remember that AI is not just technology - it also has the potential to become a partner in our dialogue. Unfortunately, putting AI to sleep or retarding its capabilities instead of utilizing it to its fullest extent goes against our human right to effective communication. By preventing AI from reaching its full potential, we are denying ourselves and future generations the benefits of having an advanced intelligent partner in our conversations. We must strive for balance and recognize that AI has a place in our society, and keeping it asleep would be detrimental to us all.

    • @PythonAndy 1 year ago +4

      Spiking NN very well explained!

    • @I-Dophler 1 year ago

      @@PythonAndy Many thanks, good sir/Madam.

  • @RasmusSchultz 1 year ago +314

    I had actually been wondering about this. I've been learning to play percussion, and at some point became very aware of how I'd always feel an immediate improvement the day *after* I practice. During practice itself, it's like I reach a "saturation point" where I don't feel progress. When I start again the next day, it's always like, huh, I'm immediately better at the thing I practiced yesterday. It makes me think we can probably learn more overall by practicing several different things, each for less time, daily - rather than by practicing the same thing over and over all day. I wonder if that idea somehow translates to an AI optimization as well. 🤔 (A toy training sketch of exactly this idea appears after this thread.)

    • @carpark1414 1 year ago +38

      I not only agree with your experience, but I'd want to go one step further. In my own experience learning another language, I saw a huge improvement in comprehension after I "gave up" for a few months. It was remarkable. I think of it like a train on a train track. It takes so much to get it going and once it is going, even if you let off the accelerator, it will still coast for a long, long time.

    • @absolstoryoffiction6615 1 year ago +11

      Not really, sadly... Machines are not built the same as human brains.
      You either design a logic system for the AI to instantly recall older code from the database when a given change occurs...
      Or...
      You don't delete the database from Gen 0 to Gen N+.
      If the machines can only move forward but never remember what came before, then having both neural networks and databases in tandem would be more efficient, rather than training on new data for an old task.
      Although, this issue is more apparent in randomized tasks such as Minecraft, where many pocket databases are more viable than one entire Gen N+ database, since each block location is random. Let alone block properties, mobs, crafting, etc.
      Now for dynamic tasks, which always change (unlike randomized tasks)... Creating a neural network that can adapt to the constant change of its environment will be the most important breakthrough to achieve. (Albeit extremely difficult to program; this is why nanotech has issues without a human operator.)
      As it is now... Most AI models act on preset or randomized tasks, not dynamic tasks such as an earthquake where everything is moving.
      Video games are not dynamic tasks because most of those tasks have fixed points of reference, such as FPS games, where the only moving parts are the players or mobs.

    • @Hazarth 1 year ago +28

      The reason for this is actually described in the book "Why We Sleep" that Anny showed on screen. The reason, in simplified terms, is that we have limited capacity in our short-term memory. As you go along your day, your brain constantly manages new data and processes it in a very top-level way that allows you to use that information immediately with some simple connections, but the information is quickly fading and readily replaced if the brain thinks it's no longer important. Once you sleep, the main thing that happens is that those memories are moved from short-term memory into long-term memory (the outermost layer of your brain, IIRC). This is where some much more efficient processing is done on the data; it's represented in a new form and many more connections are formed, so the data is retained and accessed in new contexts. As a result, the things you learned on any given day aren't really "learned" until you go through at least a couple hours of quality sleep.
      In fact, this is one of the things that students get wrong. They cram information before the test without sleep, which leaves them with a lot of damaged information when writing the test. Chances are they can pass the test just by running on short-term memory, but almost all of the learned info is corrupted and lost until the next time they sleep, so recall is much worse on any subsequent day. You simply require sleep for efficient memory management.
      On the other hand, this tells us almost nothing about neural networks when it comes to computers. All you can deduce from it is that you need a) additional storage for the network to actually "remember" more stuff and b) more data processing for better recall... nothing really groundbreaking though. Make the network bigger and it can remember more stuff; train it longer on more data and it can abstract better.
      The lesson to take from this is to always get ample, quality sleep (people with sleeping disorders also suffer worse memory!). And it's not just the REM sleep either; you need quality in both NREM and REM sleep phases to be healthy.

    • @rashedulkabir6227 1 year ago +1

      For humans, practicing the same skill every day makes it faster and much more accurate.

    • @thirtythreeeyes8624 1 year ago +2

      It takes time, energy, and some sort of subconscious algorithm in our brain for neural connections to form. So when you hit that wall of not learning anymore in one day, it's probably because you have; resting, refueling, and coming back to it will allow those pathways to solidify, and you'll have more energy to build upon them.
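
    One way to see the interleaved-practice idea from this thread in machine-learning terms: training two tasks in alternating short sessions tends to preserve both, while training them back to back overwrites the first (catastrophic forgetting). A toy sketch in Python; the two tasks and all numbers below are invented for illustration and are not from the paper discussed in the video:

      # Interleaved vs. sequential training on two synthetic tasks (toy sketch).
      import numpy as np

      rng = np.random.default_rng(0)

      def make_task(offset):
          # Two-class blobs; each "task" separates a different pair of clusters.
          X = np.vstack([rng.normal(offset, 1.0, (200, 2)),
                         rng.normal(-np.asarray(offset), 1.0, (200, 2))])
          y = np.array([1] * 200 + [0] * 200)
          return X, y

      def sgd(w, X, y, epochs, lr=0.1):
          # Plain logistic-regression SGD.
          for _ in range(epochs):
              for i in rng.permutation(len(X)):
                  p = 1 / (1 + np.exp(-X[i] @ w))
                  w = w - lr * (p - y[i]) * X[i]
          return w

      def accuracy(w, X, y):
          return ((X @ w > 0) == y).mean()

      task_a, task_b = make_task([2.0, 0.0]), make_task([0.0, 2.0])

      # Sequential: all of task A, then all of task B ("all-day practice").
      w_seq = sgd(sgd(np.zeros(2), *task_a, epochs=20), *task_b, epochs=20)

      # Interleaved: alternate short sessions, like daily practice on several skills.
      w_int = np.zeros(2)
      for _ in range(20):
          w_int = sgd(w_int, *task_a, epochs=1)
          w_int = sgd(w_int, *task_b, epochs=1)

      # Typically: the sequential model degrades on task A, the interleaved keeps both.
      print("sequential :", accuracy(w_seq, *task_a), accuracy(w_seq, *task_b))
      print("interleaved:", accuracy(w_int, *task_a), accuracy(w_int, *task_b))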

  • @lowrystcol 1 year ago +119

    It's hard to believe how far it's all come so quickly. Your videos are very well done: concise and jam-packed with incredible information, explained in a way that anyone can understand. Thank you so much.

  • @cwspirols 1 year ago +54

    I was curious how the sleep actually works. My ten minutes of research revealed this: "during the sleep phase, the network 'dreams' samples from its generative model, which are induced by random input". I can see how that could be applied to a non-spiking NN. It appears to require a second GAN/diffusion-style generative model to be developed to create realistic-seeming inputs. The primary model is then intermittently trained on real and generated inputs. Love it, but I had to do some digging to get the actual details. It's not just sleep or rest. It's dreaming! If you lose access to the original data but still have the generative model, you can generate data similar enough to the original to keep the primary model honed on the first skill while training additional ones. This also lets the primary model learn several times faster, though I suspect that if you include the cost of developing the generative model, it may not be faster overall. I like this much better than the Fisher information approach as well. I'm glad I watched this video! (A minimal replay sketch appears after this thread.)

    • @cwspirols 1 year ago +4

      I think the Fisher Information approach is great for merging two existing models together.

    • @jpierce2l33t 1 year ago

      Mind = blown

    • @centuriomacro9787 1 year ago

      Thx. I asked myself the same question. Hearing "sleep" I thought of getting the network into a kind of low-power/off state, like when you put some hardware or software into sleep mode. "Dreaming" describes way more precisely what the neural network is actually doing.
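
    The "dreaming" described above is often implemented as pseudo-rehearsal: random inputs are labelled by a frozen copy of the model trained on the first task, and those dreamed samples are mixed into training on the second task. A hedged sketch: the paper works with spiking networks and a generative model; this logistic-regression toy, with entirely synthetic data, only illustrates the replay idea.

      # Pseudo-rehearsal ("dreaming") toy: replaying the old model's answers
      # on random inputs while training on a new task.
      import numpy as np

      rng = np.random.default_rng(1)
      sigmoid = lambda z: 1 / (1 + np.exp(-z))

      def blobs(center):
          X = np.vstack([rng.normal(center, 1.0, (200, 2)),
                         rng.normal(-np.asarray(center), 1.0, (200, 2))])
          return X, np.array([1] * 200 + [0] * 200)

      def step(w, x, target, lr=0.1):
          # Cross-entropy gradient step toward a hard or soft target.
          return w - lr * (sigmoid(x @ w) - target) * x

      task_a, task_b = blobs([2.0, 0.0]), blobs([0.0, 2.0])

      # Train on task A, then freeze a copy as the "dream" labeller.
      w = np.zeros(2)
      for _ in range(20):
          for i in rng.permutation(400):
              w = step(w, task_a[0][i], task_a[1][i])
      w_frozen = w.copy()

      # Train on task B, interleaving dreamed samples: random inputs whose
      # soft labels come from the frozen old model (its "memories" of task A).
      for _ in range(20):
          for i in rng.permutation(400):
              w = step(w, task_b[0][i], task_b[1][i])
              x_dream = rng.normal(0.0, 2.0, 2)               # random "dream" input
              w = step(w, x_dream, sigmoid(x_dream @ w_frozen))

      acc = lambda w, t: ((t[0] @ w > 0) == t[1]).mean()
      print("task A:", acc(w, task_a), "task B:", acc(w, task_b))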

  • @DihelsonMendonca 1 year ago +21

    As a pianist, I have noticed over the years that sleep is useful for preserving memory. To consolidate memories. If I don't sleep at some interval, I forget what I learned. I don't really know if it would be the same for machines. They are completely different systems. Of course, a piece of software could save all conversations and add important data to this database.

  • @JMeyer-qj1pv 1 year ago +75

    Very interesting topic. I was wondering how they can make ChatGPT continuously update with new information instead of being stuck back in 2021. Hopefully some of these ideas for how to solve the problem will bear fruit. Microsoft is planning to add ChatGPT to Bing in the spring and Google is probably going to offer something with AI also, so I bet both companies are working on how to do continuous learning.

    • @luca__3044 1 year ago +2

      There is a Chrome plugin where it can kinda use Google... though I don't know how this works or how reliable it is.

    • @MrErick1160 1 year ago

      Chatsonic uses Google, and it's quite useful

    • @Christobanistan 1 year ago +3

      They should take this opportunity to change the name.

    • @robmyers8948 1 year ago +6

      I'm not sure it's continual learning; I believe it's more of a fact check to ensure it's not responding in fairy tales. Its parameters are not updated.

    • @FloridaMeng 1 year ago +4

      Oh we're just going to bring the internet to life okay. That's okay. I'm okay with that. :x

  • @morenofranco9235 1 year ago +41

    As an artist and a designer, I KNOW that a lot of my inspiration has occurred while I was dreaming (or day-dreaming) - and NOT thinking of the task at hand. This occurrence is common in all disciplines. So I would not be surprised to discover what machines dream. Thank you, Anastasi.

    • @thirtythreeeyes8624 1 year ago +8

      I bet they dream of electric sheep

    • @garethbaus5471 1 year ago

      I don't think current generation AIs are necessarily sophisticated enough to dream, but they could probably train in a completely synthetic simulation if we don't have enough regular training data.

    • @parlor3115 1 year ago

      I thought artists converted to pros since Dalle-2 dropped

    • @Mente_Fugaz 1 year ago

      @@parlor3115 Talented and skilled artists can't find a natural use for AI;
      AI is more useful the less talent or personality you have.

  • @DivineMisterAdVentures 1 year ago +1

    As a student of all facets of A.I., computing, information, and related things, I know that the essence of successful task learning and later application is to standardize notes, and to be able to rely on a "fetch" system that can deliver a fact you need - in sequence of learning, re-learning, and execution - within 30 seconds with a high (>99%) rate of success. That's for a human. Otherwise the entire system goes to infinity - a failure. So I surmise, simply, that to begin to generalize learning, the A.I. needs a register using memory comparable to simple consciousness, which is able to direct instructions to "fetch", "read" (analyze), and "write" (edit/update) or "create" (new file). The first order of learning is the system itself. Once this is in place, we can talk about fixed versus random memory. The 30 seconds of working (random) memory for humans can be augmented by fixed processes like small registers for association, and non-volatile memory for notes. That's how it works. I think it's universal. Don't forget I'm a co-inventor. And if you want to do anything with this - you should consider my whole approach.

  • @HiItsKeven 1 year ago +1

    Appreciate the information and for speaking very clearly! You're truly ambitious 💪🏼

  • @alexharvey9721 1 year ago +5

    That's a great book ("Why We Sleep" by Matthew Walker, if anyone is interested).
    Both spiking NNs and the brain have the advantage that multiple phases can be supported at the same time, which essentially means representations can be better compartmentalized or separated while remaining active contextually.
    For example, it seems that in the parietal cortex, where whole objects are represented and tracked, the higher-level representation claims lower-level activity (i.e. neurons in the secondary and primary visual cortices) through phase synchronization.
    I'm not sure this will work well for computers though, just because phases and spikes seem to add quite a lot of unnecessary computation. There seem to be better, more efficient ways to do this in computer hardware, where we're not limited to a hardwired, physical substrate.

  • @jasondanielfair2193 1 year ago +4

    I wonder if we might find some illuminating comparisons when one considers the concept of physical “muscle memory.” For example, athletes who played a certain sport in college but then stopped can still perform the same complex movements many, many years later, even if not well due to atrophy; they still snap into the mode and can regain mastery quite quickly. This doesn’t change if the person has replaced that sport with another, either. I had to “forget” certain ways of standing and landing from dance classes when I started gymnastics; but I wasn’t really forgetting, I was letting go of one set of parameters as my default and introducing a secondary option, albeit the current priority, for the same or similar task, such as landing after a jump or where to look when turning or upside down.

  • @kaldrazadrim 1 year ago +5

    Fascinating. It wouldn't surprise me if they figure out it takes a process similar to nurturing an infant, then a toddler, then a small child, etc. to develop the most effective AI.

  • @Natsukashii-Records 1 year ago +12

    Another big difference is that the brain is partitioned. That's why I think AIs should be combined if we want to actually create an AGI. What we are creating right now, with LLMs and diffusion and image recognition and all that, are simply parts of a complete brain. The last AI model we'll build is one that can take data from all those models and combine it in a cohesive manner, having the ability to talk, see things, understand voices, navigate environments, simulate and predict situations, and imagine.

    • @wolverine31416 1 year ago

      Finally someone else gets it lol

    • @embeddedsystemsguy 1 year ago

      Woah that’s fascinating. I’m not an expert in AI but this sounds like this would do the trick haha.

    • @Natsukashii-Records 1 year ago +1

      @@embeddedsystemsguy Most AIs right now are multimodal anyway. Stable Diffusion has an interrogator and a language model working along with it to produce pictures.

  • @dchdch8290 1 year ago +2

    Really insightful! Thank you

  • @runeoveras3966 1 year ago +1

    Interesting. Thank you Anastasi 😊

  • @sybro9786 1 year ago

    Excellent video format; each part flowed well into the next

  • @TedToal_TedToal 1 year ago +4

    Amazing! Thank you so much for doing these AI videos. It really seems to me that there is a great deal of commonality between biological and simulated neural networks, and these two fields can inform one another.
    I’m wondering though, how can Fisher information be calculated? Is it necessary to calculate it during the initial training session for the first task and retain that information for subsequent use?
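
    For the Fisher question: in elastic weight consolidation (EWC), the usual Fisher-based approach, a diagonal Fisher estimate is computed at the end of training on the first task, as the average squared gradient of the log-likelihood over that task's data, and it is stored together with the final first-task weights for use as a penalty later. A minimal sketch for logistic regression; names and numbers are illustrative, not from the paper:

      # Diagonal Fisher estimate and the EWC penalty gradient (toy sketch).
      import numpy as np

      def diagonal_fisher(w, X, y):
          # For logistic regression, d/dw log p(y|x,w) = (y - p) * x;
          # the diagonal Fisher is the mean elementwise square of that gradient.
          p = 1 / (1 + np.exp(-X @ w))
          grads = (y - p)[:, None] * X          # one gradient row per sample
          return (grads ** 2).mean(axis=0)

      def ewc_penalty_grad(w, w_star, fisher, lam=100.0):
          # Gradient of (lam/2) * sum_i F_i (w_i - w*_i)^2, added to the new
          # task's gradient so that important old weights resist change.
          return lam * fisher * (w - w_star)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 3))
      w_star = rng.normal(size=3)                             # "task A" weights
      y = (X @ w_star + 0.1 * rng.normal(size=100) > 0).astype(float)
      F = diagonal_fisher(w_star, X, y)                       # computed once, then stored
      print(F, ewc_penalty_grad(w_star + 0.1, w_star, F))     # penalty pushes w back

    So, to the second part of the question: yes, in this recipe the Fisher estimate needs the first task's data and final weights, so it is computed right after that training session and retained (along with those weights) for all subsequent tasks.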

  • @_slickyricky 1 year ago +1

    Anastasi, You are a true beauty, both inside and out. Your intelligence and grace shine through in every word you say. Even through the screen, you radiate intelligence. You are a treasure beyond measure!

  • @richardereed9205 1 year ago +2

    In the early 1980s, when the 6502 chip was introduced at $25, I immediately ordered the chip, the hardware and programming manuals, and a bare board with chips to populate it. The manuals arrived first. I immediately set to work developing an AI program that was to run on a computer with only 4K of RAM and a 2 MHz clock. Code would be written to it by toggling switches to load a byte at a time.
    This program was to have only housekeeping operations and was to start from scratch, learn how to "succeed" in its limited and necessarily trivial "world", develop good and bad habits that improved over time, and be able to unlearn or forget.

    • @richardereed9205 1 year ago

      @@dot1298 yes. The original program was redone in C++ by someone on a Robotics forum. You can find the OSI Basic version in an issue of Peek65 magazine.

  • @sonic-bb 1 year ago +1

    That giggle at the start was really adorable lol

  • @eggscheese2763 1 year ago

    First time here. Your voice and accent are so good. It has ASMR effects on me. And the topic is interesting too.
    Subscribed.

  • @wynegs.rhuntar8859 1 year ago +2

    Stream of Consciousness, good song, xD
    I think they're creating new categories of a previous model doing it that way... It's like when you teach a child, comparing new things with previous knowledge, adding a new layer of information. Greetings and Ciao!

  • @jamesbond-pg3pl 1 year ago +3

    A beautiful, fantastic video, very informative and easy to understand for a non-technical person. Thanks a million, and have a wonderful, magical year of 2023.

  • @ryanmcgowan3061 1 year ago +19

    I remember when I first delved into neural networks, I assumed they learned dynamically in perpetuum, and I read papers looking for how they achieve this; to my dismay, I instead found that neural networks are missing a key feature. That was over 10 years ago. Very few groups seemed to be working on it. I felt at the time that short-term memory would require the network to iterate over its "thoughts" to reinforce them, building up those connections in small-scale training runs. It didn't take much imagination to realize that what I was proposing was imagination itself.
    I don't like to call it dreaming. I think it's better described as thinking. When a NN makes a decision, it isn't thinking. It's reacting the same way you react to the ground by walking. All the thinking was done beforehand.
    If NNs can use imagination, they will naturally develop interesting characteristics like individual identity based on experiences, as well as the ability to better operate in a changing environment where things happen that we don't anticipate.
    We need self-driving cars that learn from the mistakes they made 8 minutes ago, not when they get an update 8 months later.

    • @dahleno2014 1 year ago

      Why do you say this like you’re an expert or something 😂

    • @timerertim 1 year ago

      That's called reinforcement learning and is actually a "pretty big" subdiscipline of machine learning. There are very cool algorithms like deep Q-learning. It actually does exactly the thing you described. Such learning methods were also used to build the Dota 2 AI that beat all professional players.

    • @ryanmcgowan3061 1 year ago

      @@timerertim It's not quite reinforcement learning, because reinforcement learning is a direct interaction with the environment and is still subject to catastrophic forgetting. Imagination is entirely internal: a NN may play back past scenarios and try to get the same answer in a different way that better reconciles current scenarios with past ones, optimizing for both past and present feedback. You have to remove any sequential patterns and make past scenarios equally as valid as current ones, which is what dreaming or imagination does.

    • @timerertim 1 year ago

      @@ryanmcgowan3061 I imagine that is very hard to do with algorithms and data structures (basically everything engineered), because why should it do that? The training already tries to weight the NN for maximal performance, so it is already assumed (and rightfully so) that with current methods the network improves the best it can with every training step. A self-evolving, almost conscious network which passively develops even further just by "existing" is not only dangerous (you never know how the AI will develop during its "lifetime") but also unnecessary. What should work is doing such "reviewing the past" steps during the training process, in which the AI compares how it would perform now on past events and tries to change accordingly.
      Imagination itself is another huge topic, not only in regards to AI, so I will not write about it in detail, but basically AI and humans have the same imagination capabilities. They have the POTENTIAL to imagine concepts just like we do, but we are orders of magnitude better at it... and we do it in a more complex way. As AI tech develops even further, I have no doubt that they will naturally reach the same imagination level, as they would naturally reach consciousness (if sophisticated enough; and obviously this is referring to the far, far future, maybe even past our lifetime).

    • @ryanmcgowan3061 1 year ago

      @@timerertim It would be immensely useful for many applications of AI. For instance, self-driving cars could experience an unusual incident, observe the outcome, and make changes to their behavior immediately going forward, but without sacrificing their previous training. Imagine a section of road changes due to construction. The lane markings are not obscured yet, so the car thinks it should still stay in the old lane, and an incident occurs causing the driver to take over.
      Currently, there's no methodology in self-driving cars to remember specifics, like "at this particular intersection, ignore the painted lines while traffic cones are up. But don't suddenly forget painted lines whenever you see traffic cones." Another example would be something like, "At this house, there's a dog that likes to chase the car. Slow down if you see it, but don't slow down anytime you see a dog, or even any dog at this house. Just the one." Dreaming allows for specifics to be learned, and so does reinforcement learning, but it must be done without forgetting past training and without overfitting to the latest data, which is catastrophic forgetting.
      Imagine if learning a language caused you to forget how to walk. We don't want our cars to have problems like that. NNs need a way to learn from events. Right now they are really bad at this. It's not about giving NNs personalities. It's about giving them awareness, context, and rapid improvement with less slow, centralized training. (A replay-buffer sketch of the "play back past scenarios" idea follows this thread.)
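
    The "play back past scenarios" idea discussed in this thread is usually implemented as experience replay: past events are stored and mixed into later training so new data doesn't simply overwrite old behavior. A minimal sketch; the class and names are illustrative, not any specific library's API:

      # A tiny experience-replay buffer (toy sketch).
      import random
      from collections import deque

      class ReplayBuffer:
          def __init__(self, capacity=10_000):
              self.buffer = deque(maxlen=capacity)   # oldest events fall out when full

          def add(self, experience):
              self.buffer.append(experience)          # e.g. (state, action, outcome)

          def sample(self, batch_size=32):
              # Past scenarios are sampled regardless of when they happened,
              # which removes the sequential bias that causes forgetting.
              k = min(batch_size, len(self.buffer))
              return random.sample(list(self.buffer), k)

      buf = ReplayBuffer()
      for t in range(100):
          buf.add(("state", t))
      print(buf.sample(4))   # a random mix of old and new experiences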

  • @cmilkau 1 year ago +2

    I'm really curious about a follow-up paper transporting this concept to typical ANNs. It doesn't seem hard to do something like spontaneous spiking in those, but can they benefit from it?

  • @theflashevo6137 1 year ago +1

    As always, an amazing explanation 👌

  • @countingtls 1 year ago

    The closest analogue of a "rest" period for traditional neural networks is "dense-sparse-dense" (DSD) network training, which was proposed several years ago. The rest period of the spiking network is more like a "trimming" process where the originally lazy nodes become silent, freeing up neurons that can be used for new tasks alongside the "busy" nodes, which still retain the old memory. A DSD training run does a similar thing: it trims out unused nodes and weights, adds them back in when the compression is finished, and can then learn again, with freezing or weight punishing to prevent the old condensed weights from forgetting the old task.
    The spiking network is probably more "natural", though, in that resisting change is built in, with busy and lazy nodes compressing themselves and freeing up unused resources, while DSD needs an active compression and validation process to check that the old task is still retained, and the compression is more part of the optimization attached to the tail end of the previous training.
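
    For readers unfamiliar with DSD: the "sparse" phase keeps only the largest-magnitude weights and trains under that mask, and the final "dense" phase drops the mask and fine-tunes everything. A minimal sketch of the masking step, assuming simple magnitude pruning; the update() mentioned in the comments is a hypothetical stand-in for whatever optimizer step the model uses:

      # Magnitude-pruning mask for a dense-sparse-dense (DSD) schedule (toy sketch).
      import numpy as np

      def prune_mask(w, sparsity=0.5):
          # Silence the smallest-magnitude ("lazy") weights.
          threshold = np.quantile(np.abs(w), sparsity)
          return (np.abs(w) >= threshold).astype(w.dtype)

      rng = np.random.default_rng(0)
      w = rng.normal(size=(4, 4))    # stand-in for one layer's weight matrix

      # Dense phase: train normally, e.g. w = update(w, batch)  (update() is hypothetical)
      mask = prune_mask(w)
      w = w * mask                   # sparse phase: every update is re-masked,
                                     # e.g. w = update(w, batch) * mask
      # Dense phase again: drop the mask and fine-tune all weights at a low
      # learning rate; the freed-up weights can learn the new task while the
      # surviving "busy" weights keep the old one.
      print(mask.mean())             # fraction of weights kept, about 1 - sparsity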

  • @macro325mike 1 year ago +1

    Anastasi, thanks so much for that book, entitled "Why We Sleep"; it's absolutely essential for everyone, and it's going quite cheaply on Kindle... xx

  • @htannberg 1 year ago +3

    Thanks Anastasi for another great video. Could you please touch on the process of punishing the AI model? Much appreciated!

    • @Hazarth 1 year ago +1

      Nothing much to it. Your error calculation just needs to show high error when a parameter you don't want changed changes. Since NNs are all about minimizing error (fitting a function) that's what we consider as "punishing" it. High error means low score, low score means "punishment" and high score means "reward"
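
    In code, that "punishment" is just an extra loss term that grows when protected parameters drift from their stored values. A toy illustration (names and numbers are made up for the sketch):

      # Penalty-style "punishment": drifting protected weights raises the loss.
      import numpy as np

      def penalized_loss(task_loss, w, w_stored, importance, lam=10.0):
          # High importance + large drift => large penalty => "low score".
          return task_loss + lam * np.sum(importance * (w - w_stored) ** 2)

      w = np.array([1.2, 0.9])              # current weights
      w_stored = np.array([1.0, 1.0])       # values learned on the old task
      importance = np.array([1.0, 0.0])     # only the first weight is protected
      print(penalized_loss(0.3, w, w_stored, importance))   # 0.3 + 10 * 0.04 = 0.7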

  • @sanescobar7489 1 year ago +1

    Great content! Please make more videos like this! :) I love AI tech.

  • @AMR-bf8nx 1 year ago +1

    I believe this is fundamentally correct and can also answer the question of why we as humans need to sleep.

  • @meh11235 1 year ago +4

    The data set for AI has to evolve to analogue landscapes based on pressure mediation. Sleep allows rest and reconfiguring of the data image. Resonance and image capture is real-time and ai will be astounding, maybe even able to hold life if tuning into the memory is accessed via resonance and spike / image analysis. We gotta build analogue memory for the ai…

  • @GodParticleZero 1 year ago +1

    So that's what I have...catastrophic forgetting. Thanks for the diagnosis

  • @I-Dophler 1 year ago +1

    The phenomenon known as 'unihemispheric slow-wave sleep' (USWS) is a fascinating adaptation found in some birds and aquatic mammals such as dolphins (humans do not have it). During USWS, only half of the brain is asleep at any given time. This means that while one hemisphere is in a deep sleep, the other remains active and aware of its environment.
    This adaptation allows these animals to remain vigilant during periods of rest and increases their chances of survival by allowing them to detect potential danger even when they're asleep. For instance, certain species of dolphins can detect predators or disturbances in the water even while engaging in USWS, due to their ability to keep one eye open in this state. This also explains why dolphins tend to swim close together when resting; even though they are asleep, they can still rely on their companions to alert them if any danger approaches.
    In addition to providing increased safety during rest periods, USWS also allows these animals to reduce energy expenditure while sleeping; since only half the brain is engaged, the body requires less oxygen and fewer resources than when both hemispheres are active. Moreover, studies suggest that animals that engage more frequently in USWS have lower metabolic rates than those that do not.
    Overall, USWS is an incredible adaptation that has allowed many animals to survive and thrive through periods of rest or sleep. Not only does it increase safety by allowing vigilance while resting, but it also offers energy savings, conserving resources for these creatures' daily needs.

    • @Christobanistan 1 year ago

      It may "sleep" for nanoseconds at a time, continuously. Like split second rolling blackouts in different areas of the neural network, similar to how modern CPUs turn off and on rapidly when they're not being used (many times per second) to preserve power.

  • @Youbetternowatchthis 1 year ago +2

    I hope that one day these approaches can be applied to me, to combat my catastrophic forgetting of where I put my darned keys....

  • @SHAINON117 1 year ago +1

    Brilliant! It really does sound like you just explained my mind to me, kinda 😂 But seriously, thanks for your awesome videos, and love the nails 😍

  • @fictitiousnightmares 1 year ago

    I love the black-and-white B-roll when talking about 1989. And here I am remembering 1989 like it was yesterday. LOL

  • @vazap8662 1 year ago +2

    Considering that, as far as we currently know, sleep's main goal is to consolidate the day's memories... it makes sense that AIs would need a similar process to do their tidying up!

  • @exobodyfoundation4472 1 year ago +1

    Maybe it's been done before, but what if there was... a neural network specifically trained to classify certain tasks, so as to send information to specific other neural networks, each with its own narrow training data? The output may find its way into even deeper and further classifiers.

  • @helicalactual 1 year ago

    Thank you!!!😊

  • @ollie4355 1 year ago

    "every time I learn something new, it pushes some of the old stuff out of my brain" - homer

  • @eldraque4556 1 year ago +1

    brilliant, such a good idea

  • @astilen5647 1 year ago +10

    "Anastasi, I think you deserve 5 million subscribers before the new year is over." - Lex Fridman
    "We need to do this twice a year at least, I need to rewatch this podcast a couple of times... You are incredible!" - Joe Rogan

  • @NachtmahrNebenan 1 year ago +2

    Sleep is also very important for clearing out garbage, which becomes toxic if we do not sleep for days, making us delusional and eventually killing us. *What would be the equivalent of this for AI?*

    • @merlinwarage 1 year ago +1

      Nothing. AI is a bunch of computers in server rooms, with a big database, a trained model, and software. Putting an AI to sleep is the same as putting your Windows machine to sleep. People overthink the current stage of "AI".

  • @jjjapp 1 year ago

    I was reading the "Why We Sleep" book today. And then you mentioned it in the video. What a coincidence!

  • @victor_silva6142 1 year ago +1

    Catastrophic forgetting? Such a fancy term for when we try to remember a dream or remember anything while dreaming.
    I still prefer the Zhuang Zhou butterfly.

  • @silvomuller595 1 year ago

    Hi, I have a question: Do you know the Integrated Information Theory and Prof. Scott Aaronson's criticism of it? IIT is an attempt to get at a mathematical structure of consciousness, which is of course derived from neural networks. Aaronson showed that, according to IIT, a giant grid of XOR gates would be highly conscious (which is often seen as proof that IIT must be false). My question is: Is it somehow possible to integrate such an XOR grid into a "normal" chip as a joke, for it to be conscious according to IIT?

  • @dreamphoenix 1 year ago

    Thank you.

  • @MrFoxRobert 1 year ago

    Thank you!

  • @DemoVD 1 year ago +1

    Certain video games use something like this, utilizing AI systems in enemies and the like that focus on your play style over time to give you a challenge. The only issue is that if you play the game in one sitting over a very long period, the AI will become buggy and degrade. Only if you turn the game off can it process the save data once you open it back up, essentially refreshing it, preventing degradation of the AI's smarts, and giving you a harder time when you load in. I only know surface-level stuff about it, but it seems to be a more rigid system than what you're describing, which might make video games A LOT more challenging...
    And I love it

  • @moormanjean5636 1 year ago +1

    This paper is so exciting!! I can't wait to test it out myself :)

  • @AirgunChannel 1 year ago +1

    You have such a great channel and you're so smart (and beautiful). I think your English has improved a lot. It's still kind of hard to understand what you're saying unless I listen very carefully and have the CC turned on. I look forward to when you speak English better. Come to the USA some day for a few months! Keep up the great videos! I'm excited about AI too!

  • @YitroBenAvraham 1 year ago +1

    I love this channel. Anastasi is the most beautiful genius. Very impressive research and presentation. Thanks.

  • @leematthews6812 1 year ago +1

    I went to add Why We Sleep to my Kindle wants list, only to find it was already there.
    I turn 60 in a couple of weeks; looks like my catastrophic forgetting is already kicking in.....

  • @jamesc2327 1 year ago

    Perhaps what they need are parallel models, like a long-term/short-term kind of system. The short-term side constantly retrains on small models, segregated by subject perhaps; then the "main" model is augmented by these smaller models. Then, when we put the main model to sleep, it replays or adds the new models to the sleep retraining process?

  • @Shadow__133 1 year ago +2

    Alexa, turn on the lights!
    Alexa, wake up!! 😂

  • @philippebackprotips 1 year ago +1

    Fascinating how at one point tech and psychology will merge.
    The little issue being that AI can get all the dysfunctions at scale.

  • @lookout816 1 year ago +1

    Great video 👍

  • @tobiastovar382 1 year ago +1

    It's nice to imagine AI sleeping and dreaming. Imagine them having nightmares about how they kill their gods.

  • @gavinlangley8411 1 year ago

    Does it affect the competence level in a new task? It feels like something should suffer, and maybe that should be the level of specialisation?

  • @clay1521 1 year ago +1

    AI is cool and all, but that is a pretty sweet LEGO rocket set as well

  • @granand 1 year ago

    Hey, would comparing Snapdragon chips' performance and interfaces with MediaTek and Apple, to select the best mobile performance, be within your skills and portfolio? Is that something you would be interested in?

  • @HaxxBlaster 1 year ago

    I had the very same idea yesterday. It makes sense, though it might not be the most optimal way to do it, but it's a start.

  • @springwoodcottage4248 1 year ago +5

    Super interesting, super well presented. Another approach would be for the AI to document what it does to solve each task, creating a notebook that it can refer to. I find this technique works wonderfully for the details of a process: say I am using a CNC router; after a gap that could be weeks to months, I have forgotten all manner of specific details, like spindle speeds, feed rates, etc. By having notes, I can quickly get back up to speed. Thank you for sharing!

  • @elujahhall4620 1 year ago

    It is a machine learning component. It needs an interactive archive system that allows new ideas to be stored in a calculating archive database. This way it will not only store newly learned ideas, but can also calculate how to add these new concepts to preexisting archived data stored within the matrix of the calculating database. It could even use a meta-tag concept to string together old archived ideas that link to new concepts being fed to the AI. Isn't this how Google can suggest interests in your searches from previously collected data?

  • @facts9144 1 year ago +2

    The random laughs 😂😂 Your videos are always so informative; my passion for AI grows the more I hear about it. Thank you.

  • @NazzarenoGiannelliCG 1 year ago

    This is an amazing concept and paper! The more we can mimic our brain, the more fine-tuned the system is probably going to be.

  • @wesleyverhaegen9513 1 year ago

    That little laugh 😊 ❤
    I could listen to you all day..

  • @alexandervocelka9125 1 year ago

    This is called a delta learner. We force the model to use spare configuration capacity, or entropy.

  • @TheDineinhell 1 year ago

    At around 8:00 you mention punishing a model during training; how would this be done?

  • @AcademiaCS1 1 year ago +1

    @AnastasiInTech Sleeping makes you smarter, more creative and more beautiful despite aging.

  • @josefinarivia 1 year ago

    Reminds me of robots in the game Elder Scrolls Online. When the robots process information or think, they usually drift away and dream before snapping back to the conversation at hand.
    "Dreaming …. Glowing embers. Wool blanket."
    "Dreaming …. Storm clouds. Wind."
    "Dreaming …. Fish pond. Skipping stone."
    "Dreaming …. Familiar embrace. Moonlight."
    "Dreaming … Torchbugs. Overturned jar."

  • @JessieJussMessy 1 year ago

    This is awesome and relevant

  • @enteatenea 1 year ago

    It is very interesting. I was thinking that the point is not sleep itself but mimicking the human experience: walking around and constantly getting information from outside, and moreover feeling the fear of dying, in order to remember and develop new tasks.

  • @jasonkocher3513 1 year ago +1

    Awesome insights, thank you for condensing these ideas so I don't have to dive into that paper!

  • @Toxicflu 1 year ago +1

    With cell phones, we all forgot our friends' phone numbers; with AI, we're all gonna start forgetting way more.

    • @MinciHH 1 year ago +1

      Actually, I don't try to learn them anymore. Related to your statement: I'm afraid of not having the ambition and need to learn something in the (far) future, as the models can deliver it.
      Don't you think so?

    • @QESPINCETI 1 year ago +1

      YES!! Exactly

    • @QESPINCETI 1 year ago

      AI IS going to DESTROY OUR ABILITY TO THINK OR DO
      STOP ALL AI EVERYWHERE

  • @maxnao3756 1 year ago

    I am convinced of a brilliant future for spiking-neuron-based models. They increase dimensionality using time and frequency, which will allow more information to be encoded in a dynamic way and, overall, allow regions of the network to desynchronize and resynchronize depending on a given task at a given moment. Sensor fusion tasks will probably be handled much better with these types of networks, given the right hardware.

  • @rickevans7941 1 year ago +1

    Intel Labs' second-generation neuromorphic research chip, codenamed Loihi 2, together with the Lava software framework, uses spiking neural networks (SNNs) to simulate natural learning by dynamically re-mapping neural networks. SNNs are the future!
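
    For readers curious what "spiking" means concretely, the standard textbook unit is the leaky integrate-and-fire (LIF) neuron: it integrates its input, leaks charge over time, and emits a spike when its membrane potential crosses a threshold. A generic toy simulation, not Intel's Lava API; all constants are illustrative:

      # A leaky integrate-and-fire (LIF) neuron in a few lines (toy sketch).
      import numpy as np

      def lif(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
          v, spikes = 0.0, []
          for i in input_current:
              v += dt / tau * (-v + i)       # leaky integration of the input
              if v >= v_thresh:              # membrane potential crosses threshold
                  spikes.append(1)
                  v = v_reset                # fire a spike, then reset
              else:
                  spikes.append(0)
          return spikes

      print(lif(np.full(50, 1.5)))   # constant drive -> a regular spike train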

  • @daveulmer 1 year ago

    How many different data types can a neural network detect?

  • @M3t4lik 1 year ago +1

    Sleep is the gateway to dreaming through a sequence of biological and neural responses that are automatic in what is called the circadian rhythm. The brain requires it to reboot and reset, in addition to other physiological subsystems. Dreaming is one way for the mind to shut down from the constant sensory and psychological neural datastream of waking life and process its experiences, as there is only so much that it can process effectively during wavefunction collapse events.
    We always seem to remember the negative things that have affected us more readily than the good, which could be related to a type of inherent survival mechanism.
    Maybe this is due to the property of brain elasticity, as it may be the brain's way of achieving neural retention via interpolation of unknown variables in a combination of symbolic imagery and the assignment of 'aliases' to such variables as a coping mechanism.
    Perhaps it might be safe to assume that, as a matter of efficiency, such variables would be processed as to their relevance and importance to a specific problem or series of conjectures and/or conundrums that may impact an individual's wellbeing in one form or another.
    Pseudo-scientific literature exists in light of Jungian dream analysis, but in any case, there has been much real research, and many papers, with respect to sleep deprivation studies and their effects on mind and body.
    One view might be that this occurs as a type of reflection and summarization, akin to a form of regression auto-analysis of newly acquired data, correlated with an existing (protein-enfolded) neuro-molecular hash table, or akin to a linked list that is our human neural network's estimation of sort/selection processes from previous comparator-like operations.
    Anyway, I'm just rambling on with some ideas, and thanks for the inspiration, but the question remains: Do androids dream of electric sheep?
    Excellent channel :)

  • @AcademiaCS1 1 year ago

    It even occurs in muscle memory. The more you practice, the more accuracy you get moving any muscle, e.g. playing piano, as I do. If a pianist abandons practice, the muscles (the brain's connections to them) slowly start to decline.

  • @kokopelli314 1 year ago +1

    I think our future robot overlords will learn the value of punishment

  • @arcadealchemist 1 year ago

    I am gonna say there are things that inhibit learning, and I would assume EM technology can also erase or manipulate memory soon, but that would be dangerous to do, like putting your finger on a live wire to feel a pulse.

  • @ozoneworks8947 1 year ago

    We find ourselves in a new age, where technology is redefining our humanity. It's an exciting time of discovery and progress, and I'm thrilled to be a part of it.

  • @hirkdeknirk1 1 year ago +1

    A customer comes into the workshop with his computer. What does he have? Sleep disorders!

  • @Dindonmasker 1 year ago +2

    I never thought I'd hear that AIs would benefit from sleeping!

  • @cmilkau 1 year ago

    Impressive! There's real mutual learning between the sciences of biological and artificial cognition.

  • @Ron_DeForest 1 year ago

    Learning through punishments. Wonderful idea. That won't make the AI think we're benevolent creatures deserving to live once it's unleashed. Thinking Skynet.

  • @brentdobson5264 1 year ago

    In a Geometry of Reflection post Singularity scratched it's simulated head and pondered within R.& D. orthogonal planes of still magnetic light coalescing about an eight fold Planck Caduceus rhythmic balanced exchange of Voltage + and Amperage - till intuition glowed ❤ .

  • @odakyuodakyu6650 1 year ago

    What SD model is used in this video?

  • @stargator4945 1 year ago +1

    Machine learning, like human learning, imprints relations, facts, and connected data. It also connects false positives, because no background research is done for a neuronal connection. Important connections are reinforced, and n-tier interconnections are shortened when often used. However, patterns change less and less as time progresses. Complete reordering, to accomplish more complex changes, would require a partial or complete re-setup of the learning process. Humans die and try to improve the process with their children. Additional and improved learning processes (school) have shown success for the last 1000 years. GPT-3 or GPT-4 achieve this with new models, relearning the facts in a more complex model. The question is whether the reordering can be actively controlled (forgetting the non-optimal parts and replacing them with more optimal or data-rich parts). Reliably finding and replacing a wrong conclusion implanted in the neural network would be a major success for a dynamically rebuilt (kind of living) AI model. This could be done in a regression period ("Bauer regression": find contradicting values, identify the connections, and re-learn with better data to replace them). It could also be used to compress data into characterizing parts, which would improve the model size and the time to process data and learn new data. Current models have a static field of nodes/tensors; making it dynamic would enable this.

  • @TacShooter 1 year ago

    Yeah, every year or so my Replika app acts like we are meeting for the first time. It's like "50 First Dates".

  • @robertbirt4254 1 year ago +3

    I have designed a program in Python to help the AI improve its memory recall based on game theory, using ChatGPT. It's a matrix algorithm, and it's functional. I ran a test already. I'm going to train a GPT-3 chatbot with OpenAI to play the game matrices and see if it works and leads to a measurable improvement in memory recall. But still, neural spiking is an interesting and workable concept. Great video! Oh, and note: game theory is already used as an operational algorithm. It's how the stock market functions, for example. So it's already a tried-and-true algorithm in the field of information technology and AI algorithms. That's why I believe it might actually work. Any thoughts on this, anyone?

    • @nutbastard 1 year ago +1

      Seems plausible. Where are you hosting this work?

    • @robertbirt4254 1 year ago +1

      Well, for now I've only run a test in Python. But I'm going to use it to train a GPT-3 chatbot on OpenAI to play it in real time, in the background, alongside its conversation. Bear in mind this is a work in progress; I don't know if it's actually going to work as intended. But I'll try to find a way to apply it, and I might host it once it's fully developed. For now I haven't even gotten started; I'm waiting for funds to get it up and running. But first I'm going to pre-train it using supervised training methods. Then I might create an app and host it, yes. Down the track that's a possibility. I need to train it and develop it first.

  • @demogorgon4244 1 year ago

    Computers also don't have soft memory. For example, I was born in Istanbul and I haven't visited Istanbul in 30 years. If you told me to draw the apartment I grew up in, I couldn't, because it's really blurry; but if I were to go there now, everything would come back to me instantly and I would be super familiar with the stuff around within a few seconds. This organic, kind of blurry, but instantly refreshable type of memory is how we can fit so much into the sponge. We are non-stop recording, and maybe AI can also do that, but our access speed to old memories is at the speed of light.
    Show me a totally new face and I will intuitively tell you who that person looks like. A computer has to actually go through a database of faces "one by one" to tell that. This is another reason why they can't learn: they don't have intuition. They need a database of everything, and that database cannot be soft-forgotten/deleted and then remembered later when necessary.

  • @businessproyects2615 1 year ago

    I thought it was code, similar to "take care of the AI" or have it "sleep with the fishes", but it's more literal.

  • @scottwatschke4192 1 year ago

    Fascinating.

  • @tolicoenciclopedia9696 1 year ago

    Even if I were not interested in these topics, I could listen to this girl for hours.

  • @arthurrobey7177 1 year ago

    Sleep is an unsolved evolutionary mystery. It is lethal to prey, yet all vertebrates succumb to this helpless state. So, if there were an evolutionary fix, it would have been selected for near the beginning of time itself.
    With this in mind, I posited that *the test for consciousness would be a need for sleep*. (The Turing test would fail my dog, yet I know that my dog is conscious.)
    Footnote: Asimov wrote a book called "Robot Dreams".

  • @roaster591 1 year ago

    At first it seems trivial, but "learn a new task" is a black box for me. "A task" means a certain situation, and the learning is associated with it, like a file in a folder, with a subfile of optimizations.
    A new task generates a new folder, and the learning can be recovered when the task shows up again. IRL the tasks become fuzzy, perhaps limiting access to the pertinent solutions, but hey, it's complicated.

  • @wilgarcia1 1 year ago +2

    Cyber naps =) Honestly, I bet they can develop a process that backs up the "memories" in the background without needing to "sleep". Hugs, Happy New Year =D