NEW: Better In-Context Learning (ICL), Improved RAG (Harvard)

  • Published Jan 7, 2025

COMMENTS • 30

  • @code4AI
    @code4AI  3 days ago +1

    With the automatic audio dubbing from YouTube/Google you hear a synthetic voice in your regional language.
    To hear my original voice in English, switch to "Default" or "English" in the settings. Thank you.

  • @awakenwithoutcoffee
    @awakenwithoutcoffee 22 hours ago

    You have an excellent voice & reasoning for these types of videos, great content as usual. Personally I believe the major pain point in RAG is its overlooked simplification of the back-end:
    the reliance on a single vector store is a major contributor to hallucinations (as you point out). We found that the biggest impact in decreasing hallucinations comes from improved data segregation & preparation pipelines while not relying solely on vectors (full-text search, BM25, hybrid, etc.; a minimal sketch of that hybrid idea follows below). Having said that, it's still an incomplete puzzle, and in-context learning / in-context fine-tuning are very interesting. Cheers!
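A minimal sketch of the hybrid retrieval described in the comment above, blending sparse BM25 scores with dense embedding scores. The corpus, the encoder name, and the 0.5 blend weight are illustrative assumptions, not details from the video:

```python
# Hybrid retrieval: blend sparse BM25 scores with dense embedding scores.
import numpy as np
from rank_bm25 import BM25Okapi                         # pip install rank-bm25
from sentence_transformers import SentenceTransformer   # pip install sentence-transformers

corpus = [
    "In-context learning lets an LLM adapt from examples in the prompt.",
    "BM25 ranks documents by term frequency and inverse document frequency.",
    "Hybrid search combines sparse keyword scores with dense vector scores.",
]

# Sparse side: BM25 over whitespace-tokenized documents.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# Dense side: sentence embeddings (any encoder works; this one is a common default).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)

def hybrid_search(query: str, alpha: float = 0.5):
    """Rank documents by alpha * dense + (1 - alpha) * sparse."""
    sparse = np.asarray(bm25.get_scores(query.lower().split()))
    sparse = sparse / (sparse.max() + 1e-9)          # scale into [0, 1]
    q_vec = encoder.encode([query], normalize_embeddings=True)[0]
    dense = doc_vecs @ q_vec                         # cosine similarity (unit vectors)
    scores = alpha * dense + (1 - alpha) * sparse
    return sorted(zip(scores, corpus), reverse=True)

for score, doc in hybrid_search("keyword search combined with embeddings"):
    print(f"{score:.3f}  {doc}")
```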

  • @mrorigo
    @mrorigo 3 days ago +4

    Default voice is the best. Took me a couple of weeks to get used to your English, but now it feels super-natural. Keep it coming, super-appreciate your work!

  • @En1Gm4A
    @En1Gm4A 3 days ago +7

    Let's go - please more knowledge graph + LLM stuff.
    This is the future. Think about agents showing a planned path for task execution __BEFORE__ they actually execute it. That path could be displayed on a graph, reviewed, and approved :-D It would mean a lot for agent safety (a minimal sketch follows below)
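A minimal sketch of that review-before-execution idea, under the assumption that the agent emits its plan as a DAG of steps; the step names and dependencies here are hypothetical:

```python
# Plan-then-approve: show the agent's planned path as a DAG and only
# execute after human approval. Steps and dependencies are hypothetical.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each step maps to the set of steps it depends on.
plan = {
    "fetch_docs": set(),
    "build_graph": {"fetch_docs"},
    "answer_query": {"build_graph"},
}

order = list(TopologicalSorter(plan).static_order())
print("Planned execution path:", " -> ".join(order))

if input("Approve this plan? [y/N] ").strip().lower() == "y":
    for step in order:
        print(f"executing {step} ...")  # a real agent would dispatch tools here
else:
    print("Plan rejected; nothing executed.")
```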

  • @wgabrys88
    @wgabrys88 3 days ago +2

    Dude is sharing knowledge like every day was one year of improvement ❤

  • @En1Gm4A
    @En1Gm4A 3 days ago +7

    Can't wait for semantic graph memory and task planning on that semantic graph abstraction. This will enable true AGI

    • @Wotevz
      @Wotevz 3 days ago

      Tell me more … running but not released … open to beta testers

    • @matt.lehodey
      @matt.lehodey 1 day ago

      @Wotevz I wanna know more too 🤣

  • @sgttomas
    @sgttomas 3 days ago +2

    thank you for providing all the context for this video and for bringing this research to our attention!

  • @xt-89907
    @xt-89907 3 days ago +1

    This is great. The natural next step is to expand this to more complex tensor decomposition techniques, even autoencoders. Just like with the Anthropic MI paper. If we can get a mapping of this meta knowledge graph, then we can incorporate reinforcement learning to optimize representations dynamically in-context. This could be very powerful for better Test Time Compute, improved self-awareness of the model, and so on. But just solving online learning and making it sample efficient would be a major barrier removal for the usefulness of Agents.
    What would also be great is to explicitly include a causal graph as an optional bias, writing to change covariate features as necessary. If the LLM is essentially a kind of causal model, you could make active learning very efficient.

  • @kevinpham6658
    @kevinpham6658 3 days ago +1

    Geez, left us on a cliffhanger! Can't wait until the next video.

  • @gunterstrubinsky9452
    @gunterstrubinsky9452 3 days ago

    'elon' is a 4-letter word in the academic sub-net!

  • @IdPreferNot1
    @IdPreferNot1 2 days ago

    Energy efficiency in an LLM seems like an "obvious" organizing principle. Not sure how that translates into being able to see it as visually similar... I guess any further abstraction of the true form would require more energy for a transformation?

  • @maertscisum
    @maertscisum 3 days ago

    Do you plan to cover KAG?

  • @fdavis1555
    @fdavis1555 3 days ago +1

    Fascinating research!

  • @dmytroaleinykov4088
    @dmytroaleinykov4088 3 days ago

    Thank you for your amazing videos!

  • @samarthpatel8377
    @samarthpatel8377 3 days ago

    This is good! Better alignment and the sauce for AGI

  • @syntaxstreets
    @syntaxstreets 3 days ago

    Thank you, you are awesome. I recommend your channel whenever someone talks about AI 😀

  • @TheEtrepreneur
    @TheEtrepreneur 3 days ago +1

    Salutations Mr Discover AI, you're becoming epic. Keep it up. 🏆🏆🏆
    p.s. Apple > bird > sand > sun > plane > opera! Got it at first sight, DAGs rock. Is this a 90% computational-efficiency gain over traditional LLM operations? Looks like it. 💥💥

  • @augmentos
    @augmentos 3 days ago +1

    Goooood morning ❤

  • @thingX1x
    @thingX1x 3 days ago +1

    I have a chatbot with a GraphRAG setup using word2vec. When I add new info, word2vec is retrained on it and used for prompt augmentation (roughly the loop sketched below). Is this ICL? The LLM only generates new data semantically similar per word2vec. Would appreciate your input, or I could even send it to you. I even have a structured-data .db file that updates structured data per message, file upload, or website scrape.
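Roughly the loop this comment describes, sketched with gensim: retrain word2vec incrementally when new information arrives, then use its nearest neighbors to augment the prompt. The corpus, parameters, and prompt format are illustrative assumptions, not the commenter's actual code:

```python
# Incremental word2vec retraining plus prompt augmentation.
from gensim.models import Word2Vec  # pip install gensim

# Initial training corpus: lists of tokens.
sentences = [
    ["graph", "rag", "retrieves", "linked", "context"],
    ["word2vec", "embeds", "tokens", "in", "vector", "space"],
]
model = Word2Vec(sentences, vector_size=64, min_count=1, epochs=20)

def add_new_info(new_sentences):
    """Grow the vocabulary and continue training on new text."""
    model.build_vocab(new_sentences, update=True)
    model.train(new_sentences, total_examples=len(new_sentences),
                epochs=model.epochs)

def augment_prompt(user_msg: str, topn: int = 3) -> str:
    """Prepend semantically similar known terms to the user message."""
    tokens = [t for t in user_msg.lower().split() if t in model.wv]
    if not tokens:
        return user_msg
    neighbors = [w for w, _ in model.wv.most_similar(tokens, topn=topn)]
    return f"Related known terms: {', '.join(neighbors)}\n\n{user_msg}"

add_new_info([["knowledge", "graph", "stores", "entities", "and", "relations"]])
print(augment_prompt("how does the graph store context"))
```

In this framing, the word2vec update itself is weight retraining rather than in-context learning; only the augmented prompt acts in-context.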

  • @dairin0d
    @dairin0d 3 days ago

    Thanks for explaining interesting papers!
    This kind of reminds me of the idea that knowing the "distances" between all points (concepts) of a dataset (essentially, a weighted graph) is enough to define its "internal geometry", so maybe these "random/circular walks" dynamically adjust the LLM's representation to match the observed "distances" between "nearby" words/pairs? (Just speculating; I haven't yet read the paper in detail, so maybe this is just a differently phrased view of the same mathematics they describe.)
    By the way (out of curiosity): have you heard of hyperdimensional computing / vector symbolic architectures (sketched below)? It seems to have quite a bit of overlap with what neural networks are doing geometrically, but what I found especially interesting about it is that it provides a formal mathematical approach to define (and operate on) complex data structures in vector space :-)
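For readers who have not met vector symbolic architectures: the two core operations are binding (combining a role with a filler) and bundling (superposing several bound pairs into one vector). A minimal numpy sketch with random bipolar hypervectors; the dimensionality and the example record are illustrative:

```python
# Vector symbolic architecture basics: bind with elementwise multiplication
# (self-inverse for +/-1 vectors), bundle with the sign of the sum.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; random vectors are quasi-orthogonal

def hv():
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Atomic symbols: two roles and two fillers.
role_color, role_shape = hv(), hv()
red, circle = hv(), hv()

# Bind each role to its filler, then bundle the pairs into one record.
record = np.sign(role_color * red + role_shape * circle)

# Unbinding: multiplying by a role vector recovers a noisy filler.
recovered = record * role_color
print("similarity to red:   ", cosine(recovered, red))     # high (~0.7)
print("similarity to circle:", cosine(recovered, circle))  # near 0
```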

  • @sndrstpnv8419
    @sndrstpnv8419 2 days ago

    Can you share a link to the code or paper?

  • @minissoft
    @minissoft 3 days ago

    Why do we think in 2D and 3D? We should think in n dimensions.

    • @justinnine4940
      @justinnine4940 2 days ago

      Because the input grid structure is 2D, you need to down-project the latent structure to the same dimension in order to see it (e.g., the PCA sketch below).
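What that down-projection can look like in practice, as a sketch: PCA over placeholder high-dimensional vectors standing in for real LLM latents (t-SNE or UMAP are common alternatives):

```python
# Project high-dimensional latent vectors down to 2D for visual inspection.
import numpy as np
from sklearn.decomposition import PCA  # pip install scikit-learn

latents = np.random.default_rng(0).normal(size=(100, 768))  # placeholder embeddings
coords_2d = PCA(n_components=2).fit_transform(latents)      # shape (100, 2), ready to plot
print(coords_2d.shape)
```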

  • @RaviPrakash-dz9fm
    @RaviPrakash-dz9fm 3 days ago

    Damn!

  • @stevehall794
    @stevehall794 3 days ago +1

    nothing useful to learn here

  • @VictorGallagherCarvings
    @VictorGallagherCarvings 3 days ago

    I don't think that overwriting facts with opinions is a particularly good idea.

  • @IanTindale
    @IanTindale 2 days ago

    I predict a day in the future where we have ‘emptied’ LLMs (well, not just language, but any capturable variable behaviour out there in the outside world, e.g., ducks suddenly deciding to move over there instead of staying here). These will be like our current LLMs but taken a stage further by ‘emptying’ them of everything they’ve learned, leaving behind only the fact that they’ve had training. These emptied models will then proceed to learn anew like baby animals or people, containing only the minimum ‘instinctual’ learning but empty of factual, causal, experiential, observational ‘knowledge’ until each has reached out and filled itself up again. These models will be tiny, just little seeds, and everyone can get their own, or have a few, like pets, and they will grow up to have distinct personalities (unless they start networking, sharing their knowledge, and discussing things among themselves)