A History of Reinforcement Learning

  • Published 25 Nov 2024

COMMENTS • 136

  • @rickandelon9374
    @rickandelon9374 3 months ago +46

    Hands down the best AI history channel in the world

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +4

      @@rickandelon9374 THANK YOU … no top ten amazing things this week here :)

    • @nikos.1644
      @nikos.1644 3 months ago +2

      Agreed.

  • @TheLoneCone
    @TheLoneCone 3 months ago +26

    Seeing this video at 466 views currently and shocked it doesn’t have hundreds of thousands if not millions. Awesome video

    • @Luis-qe8el
      @Luis-qe8el 3 months ago +4

      Second that, this vibe of food for thought is awesome to me!! Keep going!!

    • @ArtOfTheProblem
      @ArtOfTheProblem  1 month ago

      I messed up something with my upload, algo did not share it :( ... yet

  • @ncolmt
    @ncolmt 3 months ago +35

    The way you introduce the REAL AI to the world. Nice job!

  • @ArtOfTheProblem
    @ArtOfTheProblem  3 months ago +28

    I hope you enjoy this video, please let me know what you think below. 👇
    STAY TUNED & SUBSCRIBE: Next video on REASONING
    FULL AI series: ua-cam.com/play/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ.html
    Thanks Jane Street for sponsoring. They are hiring people interested in ML: jane-st.co/ml
    SUPPORT AOP: www.patreon.com/artoftheproblem

  • @dnlch
    @dnlch 3 months ago +3

    I can't believe I just rewatched all your videos and then you've just released another one.
    what a treasure

  • @belibem
    @belibem 3 months ago +9

    Seems like reinforcement learning's been on a wild trip since forever, but the way Brit breaks it down? It's like he's got a secret map of the RL universe. He makes the crazy journey from old-school ideas to today's stuff actually make sense. It's like watching history unfold, but you know, without falling asleep!

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +4

      @@belibem :) it was indeed a huge mess to untangle … notice I cut all the model-free detours

  • @jonathonreed2417
    @jonathonreed2417 3 months ago +10

    Another great video. It's super interesting to see that DeepMind is attempting to figure out how much real-world learning vs. simulated learning is optimal while LLM researchers are simultaneously asking questions about the use of "synthetic data". Naively (if the "synthetic data" approach proves successful at scale), it seems to vaguely point towards a further generalization in the machine learning field. I think a great follow-up video to this one would be about multimodal models, and maybe at the end discuss the idea of synthesizing this robotic action model with something like ChatGPT. Or maybe not, just spitballing.
    EDIT: just read your pinned comment, seems like you're already a few steps ahead of me on this, not surprised

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +3

      Love this... thanks for sharing your thinking, it helps

  • @quirinschweigert7794
    @quirinschweigert7794 3 months ago +4

    This is an awesome overview! Loved every second of it. Would have expected this to be at 1M+ views

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      @@quirinschweigert7794 thanks I worked super hard on this one, please help me share :)

  • @princetonpoh4637
    @princetonpoh4637 3 months ago +10

    I really liked the historical perspective on how RL started. It helps stair-step my way up to modern day concepts :)

  • @CharlesVanNoland
    @CharlesVanNoland 3 months ago +2

    General intelligence that doesn't need to be trained offline in a simulation or on a static dataset to improve entails an algorithm that learns from experience, in real time. With the exception of maybe functioning as a perception module, backprop training basically has no use for such a thing - and even as a perception module to interpret vision/audition it will be limited to seeing and hearing only stuff it has actually been trained on. What we need is a whole new paradigm, a whole new learning algorithm, that learns from scratch, from experience, how to do everything. It's right around the corner, we're right there, and it seems like everyone is still distracted by what will one day be the "compute-expensive antique brute-force method of making a computer learn", which we know today as backpropagation. Predictive learning algorithms are what my money is on. It's just a matter of working in the ability for them to learn behaviors, and relying on compact, compute-efficient sparse distributed representations. Sparse Predictive Hierarchies are the closest thing I've seen so far, but their fixed log2 prediction interval at each successive level of the hierarchy means they learn the same patterns over and over, when you want something with overlap, so that it recognizes a temporal pattern as the same pattern no matter what time it started at. I also think that instead of having a fixed scaffolding which knowledge forms over, the scaffold itself should be built as a product of experience. Something more like MONA. The problem MONA has is that its perceptual inputs are limited to clustering entire sense vectors, so that if it sees a ball in a room during the day it won't think it's in the same place when it sees the same ball in the same room at night. Individual portions of senses must be treated as equally important, not the sense as a whole generating a single input signal.
    MONA's method of input leaves no room for generalizing perception, only volition. People have been experimenting with promising novel algorithms, getting us closer, but a lot of people/corporations nowadays are just looking to get hired or make a quick buck and so are only dealing in backpropagation, when that's not the way forward. It's a lateral move that will invariably result in a dead end, eventually. It will always have its place, and its uses, but it's not going to result in autonomous sentient robotic beings that do everything that only humans can do. A real-time learning algorithm whose learning and abstraction capacity is limited only by the hardware it's running on is coming. Nobody knows how to build it yet, but I estimate less than 5 years until it comes to be, and it's going to blow everyone's mind.

    • @timl2k11
      @timl2k11 4 days ago +1

      Yes. We need a neural network architecture that enables learning on the fly. Biological brains adjust parameters in real time. Instead of pretraining, the training is done continuously in response to new inputs. I imagine this would be computationally expensive though and possibly impractical.
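The continuous, no-pretraining learning loop described above can be sketched with a toy example: a linear predictor that nudges its weights after every single observation that streams in, with no separate training phase. This is illustrative only; the model, learning rate, and target function are invented for the sketch:

```python
import random

class OnlineLinearPredictor:
    """Toy model that learns continuously: one small weight update
    per incoming observation, with no separate pretraining phase."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def observe(self, x, y):
        # Adjust weights immediately from this one prediction error.
        err = y - self.predict(x)
        for i, xi in enumerate(x):
            self.w[i] += self.lr * err * xi
        return err

# Simulated input stream; the model adapts on the fly to y = 2*x0 - x1.
random.seed(0)
model = OnlineLinearPredictor(2)
for _ in range(500):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    model.observe(x, 2 * x[0] - x[1])
print(model.w)  # weights drift toward [2.0, -1.0]
```

Every `observe()` call is one real-time update, which is the property the comment asks for; as it also notes, the cost is doing update compute on every input rather than once up front.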

  • @timl2k11
    @timl2k11 3 months ago +3

    The music that starts @ 25:40 provides a nice transition and nicely conveys the future potential of the technology.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      thanks! I tried hard to make sure the music didn't 'get in the way' of the content

  • @kingdodongo4126
    @kingdodongo4126 3 months ago +3

    You have a magical ability to explain with such eloquence and clarity that you make me feel intelligent. All that led up to the moment (and also from your previous videos) when you explain domain randomization at 19:45 - “you actually need less precise simulation” - that realization felt like an explosion in my mind. Thanks for your channel, man

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +2

      @@kingdodongo4126 Woo! So glad that moment worked, I remember when I first figured that out too
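The 19:45 remark quoted above (“you actually need less precise simulation”) is the essence of domain randomization: instead of calibrating one precise simulator, you resample the simulator's physics every episode, so a policy that scores well must be robust across the whole range and therefore transfers better to reality. A minimal sketch of that episode loop; all names, dynamics, and parameter ranges here are invented for illustration:

```python
import random

def make_randomized_sim():
    """Sample a fresh 'world' each episode instead of one precise model."""
    return {
        "mass": random.uniform(0.5, 2.0),       # kg; deliberately wide range
        "friction": random.uniform(0.1, 1.0),   # unused here, shown for flavor
        "sensor_noise": random.uniform(0.0, 0.05),
    }

def policy(observed_mass, gain=9.81):
    # A policy only scores well if it copes with *any* mass it is handed.
    return gain * observed_mass

def run_episode(policy_fn, sim):
    # Stand-in environment: reward is higher when the commanded force
    # cancels gravity for whatever mass this episode's world sampled.
    target_force = sim["mass"] * 9.81
    noisy_obs = sim["mass"] + random.gauss(0.0, sim["sensor_noise"])
    return -abs(policy_fn(noisy_obs) - target_force)

random.seed(1)
returns = [run_episode(policy, make_randomized_sim()) for _ in range(100)]
avg_return = sum(returns) / len(returns)
print(avg_return)  # near zero: the policy handles every sampled world
```

In a real pipeline the policy would be trained (e.g. by RL) against these randomized episodes, which is exactly why a sloppier simulator can beat a precise one: the policy never gets the chance to overfit a single set of physics constants.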

  • @notgaybear5544
    @notgaybear5544 3 months ago +5

    Another amazingly lucid video. Thank you! By the end it feels like we're just getting started.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +2

      :) Yes definitely, I actually had a whole other part on world models (model-based) I had to cut, so I'm setting up the next video for that

    • @notgaybear5544
      @notgaybear5544 3 months ago

      @@ArtOfTheProblem love it! Can't wait.

  • @ko95
    @ko95 2 months ago +1

    HALF an HOUR?! Why was this not shown in my subscriptions feed?! Most captivating content on the tube! Thank you

    • @ArtOfTheProblem
      @ArtOfTheProblem  2 months ago +1

      Thank you! Please help me share it; I don't know why the algo ignored it this time. Perhaps the click-through rate or something.

  • @DefaultFlame
    @DefaultFlame 12 days ago +1

    I've only watched a few of your videos so far, but I've fallen in love with your in-depth yet easily understandable explanations of how things work, which discoveries led to which innovations and how, and the way you avoid both the unreasonable hype of techbros and PR departments and the equally unreasonable pessimism and negativity of people like Yann LeCun and Noam Chomsky.

    • @ArtOfTheProblem
      @ArtOfTheProblem  12 days ago +1

      This comment means a lot, thank you. I try to stick to my lane and provide value where I can

    • @ArtOfTheProblem
      @ArtOfTheProblem  11 days ago +1

      THANK you this means a lot to me. If you can help share my new video around any of your networks today it might catch fire and would help me support the channel: ua-cam.com/video/PvDaPeQjxOE/v-deo.html

    • @DefaultFlame
      @DefaultFlame 11 days ago +1

      @@ArtOfTheProblem I don't really do the whole "social media" thing, sorry.

    • @ArtOfTheProblem
      @ArtOfTheProblem  11 days ago +1

      @ leaving a comment and like is more than enough ! ( me neither :)

  • @77batering
    @77batering 3 months ago +2

    I love this channel so much. I only wish you made videos faster, but it's always such engaging content that I can see why it takes a while

  • @arc8dia
    @arc8dia 3 months ago +3

    11:00 when you hear this music, magic is about to happen

  • @Dr.Menendez
    @Dr.Menendez 3 months ago +5

    All your videos are excellent. Congratulations.

  • @jimlbeaver
    @jimlbeaver 3 months ago +3

    Really great video! Awesome summary of the history of RL… Very clear. Nice job.

  • @KalebPeters99
    @KalebPeters99 3 months ago +2

    Awesome stuff!! I just love the way you explain things 🙏💕
    I feel like I'm closer than ever to actually understanding AI 😅😅

  • @leosaucedo
    @leosaucedo 10 days ago +2

    Among the best content out there, thank you 🙏

    • @ArtOfTheProblem
      @ArtOfTheProblem  10 days ago

      Thanks! Did you check out the latest video? I just posted a follow-up

  • @JiHa-Kim
    @JiHa-Kim 3 months ago +1

    Thank you for your videos, I love the way you carefully present historical information to build up to modern ideas, it is better than any other channel out there. Keep up the great work!

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      thank you, it's a ton of work and I appreciate this

    • @ArtOfTheProblem
      @ArtOfTheProblem  11 days ago

      new video is out would love if you could help me share it around, I only have 24 hours left for the algo to catch it: ua-cam.com/video/PvDaPeQjxOE/v-deo.html

  • @Lightconelabs
    @Lightconelabs 3 months ago +1

    His videos have helped my students get interested in science, AI and computation!

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      @@Lightconelabs Thrilled to hear it! What ages?

  • @NanoAGI
    @NanoAGI 3 months ago +3

    AI changes on a weekly basis and it's hard to keep up, but you ground it all to its roots. Thanks for your videos and history.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      @@NanoAGI yes I know the feeling, I don't see others doing this so I'm happy it helps

  • @AdamJeffries-r4f
    @AdamJeffries-r4f 3 months ago +1

    Thank you for the credit at the end. You compressed the data well and thus, the info regarding the value function was more easily understood, in my opinion.

  • @akif1633
    @akif1633 2 months ago

    Awesome work my friend! It's hard to wait for the next video to come out. ❤

  • @nowweknow.
    @nowweknow. 3 months ago +1

    Such a good video man, really enjoyed it and subbed ✌🏼

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      thank you for the comment, the algo seems to not like this video!

    • @nowweknow.
      @nowweknow. 3 months ago +1

      ​@@ArtOfTheProblem It happens

  • @ai_outline
    @ai_outline 3 months ago +2

    Really enjoy watching computer science content, particularly in the subfield of AI. Please don’t ever stop ❤️

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      Appreciate the support. Consider supporting AOP! www.patreon.com/artoftheproblem

  • @AfreediZ
    @AfreediZ 17 days ago +2

    One of the greatest inspirations for me is you, sir, thanks a lot ❤. Love you from India

    • @ArtOfTheProblem
      @ArtOfTheProblem  17 days ago

      thank you! glad you found this

    • @ArtOfTheProblem
      @ArtOfTheProblem  11 days ago

      I agree :) also If you can help share my new video around any of your networks today it might catch fire and would help me support the channel. I appreciate your help! ua-cam.com/video/PvDaPeQjxOE/v-deo.html

  • @__m__e__
    @__m__e__ 3 months ago +1

    Fantastic and engaging as always.

  • @maryjanecruise1674
    @maryjanecruise1674 3 months ago +3

    Brilliant work!

  • @cesar_ai_eng
    @cesar_ai_eng 3 months ago +1

    I'm reinforced to hit the Like button on all your videos

  • @posthocprior
    @posthocprior 3 months ago +1

    I'm an inventor who has started to work on industrial robots (mostly for warehouses). This was excellent. I'd like to suggest a video that formally treats Moravec's paradox. Specifically, why the computational methods that have been used on language and games, such as Monte Carlo tree search and autoregressive generative methods, don't work on physical space. If you have time, you could also explore why geometrical approaches to tackle this problem, such as CAT(k) spaces, work in two dimensions but fail in three dimensions. I'd love to see a historical approach to why we seem so far from, say, a robot that could do your dishes and so close to, say, a computer that could be your child's math tutor.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      @@posthocprior thank you ! I will indeed follow this thread

    • @ArtOfTheProblem
      @ArtOfTheProblem  11 days ago

      new video is out would love if you could help me share it around, I only have 24 hours left for the algo to catch it: ua-cam.com/video/PvDaPeQjxOE/v-deo.html

  • @shawnbibby
    @shawnbibby 3 months ago +4

    lucky me for this video today!

  • @vinniepeterss
    @vinniepeterss 3 months ago +5

    great video as always!

  • @Therandomlaugh66
    @Therandomlaugh66 3 months ago +1

    Fantastic Video mate, keep up the great work!

  • @Joxus
    @Joxus 2 months ago +2

    The YouTube algorithm is not doing this video the justice it deserves. Maybe Google needs to up their reinforcement learning game for YouTube recommendations.

    • @ArtOfTheProblem
      @ArtOfTheProblem  2 months ago +1

      thanks :) I was frustrated with this video not getting shared at all by the algo. My only guess is I was messing with thumbnail ideas when I first published. Part of me thinks I should republish it, but I never do that....

  • @idegteke
    @idegteke 3 months ago +1

    The substantial but systematically overlooked problem with the method in this video is that solving real-life problems autonomously requires a system built from identical fundamental code/data sets, with the inherent capability to figure out what to do from just a few (potentially even one single) guidelines, reaching everything through a uniform rating/feedback system that is rewarded (similar to putting an extra bead into the box of the most successful move). That, however, requires us, the creators of potential future AI systems, to identify a general marker of success - potentially using a method to measure the level of actual complexity of a set of code/data, and giving a "bead" to the most complex fundamental code/data set. That's what I've been working on for some 15 years, by the way.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      Cool, do you have anything I can read?

    • @idegteke
      @idegteke 3 months ago +1

      @@ArtOfTheProblem Some of my verbal notes (partly in Hungarian:) are uploaded to my channel but nothing tangible yet. Understandably, they are not watched at all. As I once said: it doesn't matter how quickly you can run, how skillfully you climb, or how strong, resilient and determined you are if you don't turn precisely in the correct direction before taking even the very first step. I really doubt that anyone currently has a clear and valid idea about the definition of life (even matter, actually), intelligence, or consciousness, or the relation between these assumed categories. We want to solve a huge crossword in an ancient language that nobody speaks anymore, where every word crosses every other and the definitions are merely moods, dreams and songs of birds. As for myself, working on this field most of my time, I've been trying just to turn in the right direction for the last 15+ years without taking any actual steps (e.g. writing a line of code), just letting others climb endless walls and run sideways vehemently - with pathetic results like generating mindless eye candy or imitating a (pretty limited) human's responses. I assume that I will KNOW when I'm ready to take a step… not even further… just to take an ACTUAL step at all. Once it actually happens (and it is coming closer this year) and it is the right direction, I will upload videos with some tangible results.

  • @Coder.tahsin
    @Coder.tahsin 3 months ago +2

    After a long time ❤️

  • @sirishkumar-m5z
    @sirishkumar-m5z 3 months ago +1

    It's amazing how reinforcement learning works. If you're interested in learning more about AI, other tools may provide you with more features and insights.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      FYI consider supporting future content via www.patreon.com/artoftheproblem - thanks again

  • @Arrrghmageddon
    @Arrrghmageddon 3 months ago +1

    This was an incredible video. Thank you!

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      thank you for sharing, glad people are finding this

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      FYI consider supporting future content via www.patreon.com/artoftheproblem - thanks again

  • @lejb8962
    @lejb8962 3 months ago

    Thanks for the excellent video... I started watching your Info theory & machine learning videos as a wide-eyed kid who loved the idea of AGI. After reading Asimov (and other, more controversial authors) and living through GPT/DALLE, I became convinced that AI is not the way forward. Nonetheless, I remain a huge fan of your videos. They are by far the most informative videos on the topic I have ever seen.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      @@lejb8962 thanks for sharing , what path ahead are you excited about ?

    • @lejb8962
      @lejb8962 3 months ago +1

      @@ArtOfTheProblem Oh, me? I'm into homesteading now; I've become a luddite fundamentalist type. 😅

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      @@lejb8962 love it!

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      @@lejb8962 I haven't owned a phone since 2009

    • @ArtOfTheProblem
      @ArtOfTheProblem  11 days ago +1

      new video is out would love if you could help me share it around, I only have 24 hours left for the algo to catch it: ua-cam.com/video/PvDaPeQjxOE/v-deo.html

  • @theK594
    @theK594 3 months ago +1

    Again the top❤!

  • @ronak14p
    @ronak14p 2 months ago +1

    All time great video, thanks so much

  • @1msirius
    @1msirius 3 months ago +2

    whenever I see your video, I just click it

  • @mike-q2f4f
    @mike-q2f4f 3 months ago +1

    Wow!! I sent this to my kids.

  • @viniciusnoyoutube
    @viniciusnoyoutube 3 months ago +1

    Very cool

  • @dr.mikeybee
    @dr.mikeybee 3 months ago +3

    This is a great essay. Thanks! BTW, I think of RL as the opposite approach to gradient descent. With gradient descent we look at error and use the chain rule to update weights. With RL we add noise to the weights then test for error. BTW, you can use action tokens in transformers. In other words, the token is the representation for an action. We can collect actions as tokens from a customer service representative's actions, placing them within a transcript, for example.
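The contrast drawn in the comment above (chain-rule weight updates vs. adding noise and testing) can be made concrete with a tiny gradient-free search: perturb the weight, measure the error, and keep the perturbation only if the error drops. This is a generic random-search sketch illustrating the commenter's framing, not any particular RL algorithm; all names and numbers are made up:

```python
import random

def mse(w, data):
    """Error of a one-weight model y = w * x on a small dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Target relationship to recover: y = 3 * x.
data = [(x, 3.0 * x) for x in (-2.0, -1.0, 0.5, 1.0, 2.0)]

random.seed(0)
w = 0.0
best = mse(w, data)
for _ in range(2000):
    # Perturb the weight with noise, then test: no chain rule anywhere.
    candidate = w + random.gauss(0.0, 0.1)
    err = mse(candidate, data)
    if err < best:  # keep the perturbation only if measured error drops
        w, best = candidate, err
print(w)  # settles close to 3.0
```

Evolution strategies and other black-box methods scale this perturb-and-test idea to millions of weights; backprop instead computes the exact gradient in one pass. The trade-off is gradient precision vs. needing only a scalar score from the environment.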

  • @gloobark
    @gloobark 3 months ago +1

    18:16 damn he's bussin it down

  • @civismesecret
    @civismesecret 3 months ago +1

    This video was too short bro I need to watch it twice

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      Thrilled to hear it.... the script was so, so long but I had to streamline it (removed all model-based methods)

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      FYI consider supporting future content via www.patreon.com/artoftheproblem - thanks again

  • @vinniepeterss
    @vinniepeterss 3 months ago +2

    ❤❤

  • @warpdrive9229
    @warpdrive9229 3 months ago +1

    We have come a long way. But we have miles to go before we sleep.

  • @whatarewaves
    @whatarewaves 3 months ago +1

    Instead of training RL models from scratch, we appear to be pivoting to combining LLM knowledge with action space choices to form pseudo RL models. Is this the best way forward?

    • @whatarewaves
      @whatarewaves 3 months ago +1

      It seems like LLMs have pulled attention away from traditional RL techniques for improving general systems, right as we were developing better and better pure RL systems

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago

      @@whatarewaves this seems to be the case and I’m tracking this as we speak

  • @JavierSalcedoC
    @JavierSalcedoC 3 months ago +4

    I remember trying to run character recognition software back in 1992 or 1993

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      Love it , how did your experiments progress ?

    • @JavierSalcedoC
      @JavierSalcedoC 3 months ago +2

      @@ArtOfTheProblem My brother's best friend's dad happened to travel to the US a lot, and on one trip they bought one of those scanners with wheels (similar to a Geniscan 4000), which came with a 3 1/2" floppy disk containing this character recognition software. IIRC it was made to recognize only printed letters, not handwritten ones. It was standalone software, separate from the program used to scan pictures, with a DOS-shell-like UI where you had to load the jpg image and it would spit out a txt file as a result. I managed to make it work, but I remember my 386 PC constantly crashing when running it.
      No mention of AI anywhere in sight; it was just cool software back then

  • @sudjen
    @sudjen 2 months ago +1

    I watched all of your YouTube videos and yet YouTube didn't recommend this to me? Weird

  • @djayjp
    @djayjp 3 months ago +2

    To be clear there's no actual reward or punishment occurring, even simulated, it's just selecting for or against a particular response/state. These are just the silly words used for this approach.

  • @NeuroPulse
    @NeuroPulse 1 month ago

    How long until we have robots that look like Haley Joel Osment walking around?

  • @zerotwo7319
    @zerotwo7319 3 months ago +3

    I still think these systems could be better.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +2

      definitely, but I think we are at the cusp of a big leap

    • @intptointp
      @intptointp 3 months ago +1

      Lol. Yes.
      That’s what motivates people to learn them.

  • @mostlynotworking4112
    @mostlynotworking4112 3 months ago +1

    For the algo

  • @dadsonworldwide3238
    @dadsonworldwide3238 3 months ago +1

    Amazing work 👏 Unfortunately we've already dug out complexity in these areas in the 1300-1500s, then in eccentric movements of thought creating English, English law, the steam engine, archeology, etc. etc. etc.
    This is why the great debate was warned ⚠️ no one would have a religious vs. science issue when Darwin & evolution anthropomorphized grand unified theory is ancient old-world beliefs lol
    Thermodynamical systems like, say, the electronic plasticity of brain organoids injected dead with rabies still play ping pong, because it has nothing to do with learning or feelings when dogs/valves are mechanically switched on and off, or like a polaroid-flashed image on a canvas picture.
    This analytical y-axis dualistic brain + primordial self-soul agency energy density within humanity triangulates thermodynamical systems similarly, but it's not the same, it's just 1 part of many.
    Things like curses and blessings, standardized weights and measures, addition and subtraction, emerging energetic properties, e=mc idealistic forces, faith and physical lawisms, works we plagiarize and correlate effortlessly, prescribed upon the world until the 1500s, when we learned how effortlessly we were overcoming horizon paradoxes.
    Shocked to learn how things like mosaic commandments and English-law moral realism were really in the thermodynamical systems in the world around us.
    1900s structuralism, platonic wartime posterity, everything-physicalism, everything-starts-in-Greece revisionist history curriculum, the great debate, anthroposophy was one last time exhausting old-world beliefs, math mapping.
    Pre-1500s name & order, face value, dualistic form & shape, 1890s-2010. Living in whataboutism nihilisms as if math was foundational, a judge-a-book-by-its-cover era movement-of-thought exercise.
    Right before, it was all about building the library & museum with a singularity fetish, because we obviously use a letter. The old Assyrian-Babylonian-Greek 3-body problem in space, yoo-hoo woo, uncertainty is not here-on-earth realism, it's a pretentious clocklike broken tune of weights and measures.
    Since the mass displacement of Europe, Asia, and Africa into America and the UK, it made sense to help these immigrants in school and succeed as new borders drew and new nations adjusted.
    So now we have a very pretentiously informed perception in our society when we need the best long-term decision-making skills

    • @dadsonworldwide3238
      @dadsonworldwide3238 3 months ago

      It's unfortunate that separatist puritan pilgrim classical Americans were pushed out into hardware that knows the key to the cosmos esoterica, America, longitude and latitude, better than what colleges incentivize and draw from, which is very anthropomorphized

  • @Blooper1980
    @Blooper1980 3 months ago

    Wow… that background noise!!! Really?

  • @feynstein1004
    @feynstein1004 3 months ago +1

    Still haven't solved the long-term memory problem, I see 😂

  • @Technologysciencefruit
    @Technologysciencefruit 3 months ago +1

    Yrlui

  • @ginogarcia8730
    @ginogarcia8730 3 months ago

    learned to feel? hmm

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      @@ginogarcia8730 Thoughts on a different title? I've been experimenting, though I like the analogy of the value function

    • @ginogarcia8730
      @ginogarcia8730 3 months ago

      @@ArtOfTheProblem Nah, I thought it was bad clickbait, but you know what you're doing fasho, so it's fine to keep it. It does catch the eye. And I think somehow with LLMs they can 'feel' something when hinged on emotional words and the connotation of some words.

    • @ArtOfTheProblem
      @ArtOfTheProblem  3 months ago +1

      @@ginogarcia8730 if you have another non-clickbait title let me know, as I'm still not seeing good click-through on this title. "THE ROBOTS ARE COMING" :)