New Summarization via In Context Learning with a New Class of Models

  • Published 29 Nov 2024

COMMENTS • 51

  • @hienngo6730 · 6 months ago +25

    Criminally underrated channel. One of the absolute best AI/LLM YouTube channels that somehow only has 55K subs?!? Thank you for all of your hard-earned insights; very useful to jumpstart our own projects.

  • @coolmcdude · 4 months ago

    I love the part of this video that goes over the types of summarization. That part could be a video on its own.

  • @supercurioTube · 6 months ago +2

    I've been listening to your videos for months now, and as I'm transitioning towards building LLM apps myself, I'm really grateful for the insights you've been sharing all along.
    It's invaluable to learn from someone who's been building realistic, real-world products based on LLMs while following the research closely.

    • @samwitteveenai · 6 months ago +4

      Thanks, this is exactly what I was aiming to do with the channel. I never set out to be a "youtuber"; I started by showing some friends cool stuff with LLMs and it took off. I try not to hype things, just show what can be done with various models.

  • @davidtindell950 · 6 months ago +3

    I have been reviewing many of your YT videos and evaluating your many code examples. This video is certainly different in that it makes us think about how to transition from the current state and applications of LLMs to new personalized and curated practical solutions -- especially by applying smaller, faster, lower-cost "variant" LLMs like Anthropic's Haiku. I agree that we can find a "middle ground" between Sam A.'s two so-called "choices"!

    • @davidtindell950 · 6 months ago +1

      Now, going back to review your earlier "Mastering Haiku" video!

  • @kenchang3456 · 6 months ago +2

    Another excellent video. Thanks for pushing forward the practicalities of using the variety of models and services that could be appropriate for your project and where you are in the progress of your project.

  • @HowtoSmartWork · 6 months ago +2

    This kind of content is really impressive. I was working on a note-taking app and trying to build a scalable app with my team.
    We had a lot of challenges; after watching your explanation I was able to relate it to this.

  • @sayanosis · 6 months ago

    You single-handedly explain literally everything someone needs. Thank you so so much for what you do ❤

  • @DannyGerst · 6 months ago

    Great idea. I did something similar by grouping sentences by topic (Louvain community detection algorithm), so that sentences with the same semantic meaning are grouped together. It works incredibly well for chapter-by-chapter book summaries. The benefit is that topics (what you called sections) stay grouped even if the topic comes up again in later sections. But in the end it was still Map Reduce, so I am curious to see the result combined with your new system.
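    A minimal sketch of the clustering step described in this comment, assuming sentence-transformers, scikit-learn, and networkx are installed; the embedding model, similarity threshold, and function names are illustrative choices, not something shown in the video:

    ```python
    # Hypothetical sketch: cluster sentences into semantic groups with Louvain,
    # so each group can then be summarized independently (the "map" step).
    import itertools
    import networkx as nx
    from networkx.algorithms.community import louvain_communities
    from sentence_transformers import SentenceTransformer
    from sklearn.metrics.pairwise import cosine_similarity

    def cluster_sentences(sentences, threshold=0.55):
        """Group sentences with similar embeddings into Louvain communities."""
        model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works
        emb = model.encode(sentences, normalize_embeddings=True)
        sim = cosine_similarity(emb)

        # Build a graph with an edge wherever two sentences are similar enough.
        g = nx.Graph()
        g.add_nodes_from(range(len(sentences)))
        for i, j in itertools.combinations(range(len(sentences)), 2):
            if sim[i, j] >= threshold:
                g.add_edge(i, j, weight=float(sim[i, j]))

        # Louvain community detection: each community is a candidate "section".
        communities = louvain_communities(g, weight="weight", seed=42)
        return [[sentences[i] for i in sorted(c)] for c in communities]

    # Each returned group is summarized on its own (map), then the partial
    # summaries are merged into a final summary (reduce).
    ```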

  • @ozind12 · 6 months ago +5

    I could not find the link to the code for the summarization app being talked about in the video. It would be interesting to see the flow.

  • @bennie_pie · 6 months ago +1

    Your talk summarised my project... I too have been using Claude, except Opus; however, my free API access ends in a few days, so the idea of trading down to Haiku, but with multiple iterations, in order to build something that could go live was just starting to dawn on me, and then boom, you're solving issues or suggesting use cases I hadn't even considered! This video has been absolute gold - thank you.

    • @jarad4621 · 6 months ago

      Yeah, when Claude came out I didn't give a crap about the fancy models; it was the use cases for the good but cheap models like Haiku, and now Llama 3, that excite me. Low cost but still effective = $$$$$$. My Phi-3 agent swarm with agentic self-reflection, auto error correction, iterative improvement, and quality assurance is going to be EPIC and free. Trust me, learn agents ASAP; the time is coming, be ready and be at the front.

    • @bennie_pie · 6 months ago

      @@jarad4621 Yeah, local agents that work while you sleep are the way. But it can be like herding chickens... time-consuming and you still get shit. It's that fine balance of an intelligent model, good prompting, and good oversight. CrewAI seems decent, or is it better to DIY?

  • @MeinDeutschkurs · 6 months ago

    I love your thoughts!
    In the moment each word takes flight, spoken, penned, or whispered into the night, we dream a future bright where models converse, their voices intertwine.
    An IoT daydream, woven from the threads of thought and machine's silent hum.

  • @puremajik · 5 months ago

    Thank you, this was very instructive. Can you recommend the best libraries for: 1) sectioning a document based on topic changes, 2) summarizing each section while maintaining contextual continuity and coherence, and 3) combining the summaries into a cohesive final summary?
    I'm thinking something like transformers (Hugging Face), spaCy, Gensim, pandas?

  • @ralph5768 · 6 months ago

    Thanks Sam! I have been really tinkering with summarization and this helps a LOT. Subscribe + like.

  • @stephaneleroi8506 · 6 months ago

    Excellent. Do you know where the summarization with sections plus the full document in each section is implemented?

  • @reza2kn · 6 months ago

    Brilliant video Sam! 🤗 Great job! Learned a ton!

  • @SergioMunozGonzalez · 6 months ago

    Thank you so much for the video, Sam; do you have any implementation of this new summarization method?
    Thank you in advance.

  • @TheMelo1993 · 6 months ago

    Great content 👍! Do you have any suggestions on how to implement this? Or a repo?

  • @willjohnston8216 · 6 months ago +1

    Another great video. Sam, have you found any methods for having the LLM spend more time on the analysis? The results I'm getting seem generic, like something summarized from the web. I'd like to find a way to force more thinking through the problem set.

    • @samwitteveenai · 6 months ago

      This is a really good question. I think there are at least two paths to this. 1. Better alignment training, where the model can push back and clarify things better. A version of this (perhaps not the best version eventually) will probably come in the next OpenAI model on Monday. This kind of clarification in analysis is very important for self-recursive learning; it's something I have been running a lot of tests on, including with some unreleased models, but I have no amazing results I can talk about yet. 2. You can do something similar by prompting from multiple angles, e.g. have one prompt that rewrites the question into multiple questions or angles of analysis. This is a bit of what the summarization prompts do in the app I show.
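      A rough sketch of the second path (prompting from multiple angles), assuming the anthropic Python SDK and a Haiku model; the prompts, model id, and helper names are illustrative, not the actual prompts from the app:

      ```python
      # Illustrative multi-angle prompting: one call rewrites the question into
      # several angles, each angle is analysed separately, then merged.
      import anthropic

      client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
      MODEL = "claude-3-haiku-20240307"  # assumed Haiku model id

      def ask(prompt: str) -> str:
          resp = client.messages.create(
              model=MODEL,
              max_tokens=1024,
              messages=[{"role": "user", "content": prompt}],
          )
          return resp.content[0].text

      def multi_angle_analysis(document: str, question: str, n_angles: int = 3) -> str:
          # 1. One prompt rewrites the question into several angles of analysis.
          angles = ask(
              f"Rewrite the following question as {n_angles} distinct angles of "
              f"analysis, one per line:\n\n{question}"
          ).splitlines()

          # 2. Analyse the document from each angle separately.
          answers = [
              ask(f"Document:\n{document}\n\nAnalyse it from this angle: {angle}")
              for angle in angles if angle.strip()
          ]

          # 3. Merge the per-angle answers into one deeper analysis.
          return ask("Combine these analyses into one coherent answer:\n\n" + "\n\n".join(answers))
      ```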

  • @comfixit · 6 months ago +2

    Content and commentary were top notch, thank you for this video. One area for improvement: you way overused the B-roll. The first half of the video was kind of off-putting; in the last half the B-roll was all good, as it related to the subject. For example, when you are talking about Anthropic's family of models and you show logos of Anthropic, pricing charts, performance charts, etc., that is great stuff. But at the beginning you are talking and we are seeing animations of robots with a sticker that says Hello, and that doesn't work. I would rather see a talking head in those cases if you don't have B-roll that is strongly related to the content.
    Just a personal preference, but I very much enjoyed the video content.

  • @ralph5768 · 6 months ago

    Do you have a code example for this new type of summarisation?

  • @experter_analyser · 6 months ago

    I have always found the videos very interesting and educational, with different new thoughts. ❤

  • @janalgos · 6 months ago

    My concern with smaller models is the relatively higher rate of hallucination. What has your experience been with Haiku when it comes to hallucination?

    • @jarad4621 · 6 months ago

      Agentic patterns: reflection/review, iterative improvement, QA agent collaboration, one master Opus overseer to manage, etc. This will solve all your concerns about quality and still be super cheap.

    • @samwitteveenai · 6 months ago +1

      I don't think the hallucinations are that much more of a problem. Never use an LLM for facts; use the context for that. The advantage with the cheaper calls is that you can do self-reflection etc. to double-check these.
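      A minimal sketch of the "cheap self-reflection to double-check" idea, assuming the anthropic Python SDK and a Haiku model id; the prompts and helper names are assumptions, not code from the video:

      ```python
      # A second cheap call critiques the draft summary against the source
      # context, and a third call revises it only if problems are found.
      import anthropic

      client = anthropic.Anthropic()
      MODEL = "claude-3-haiku-20240307"  # assumed Haiku model id

      def call(prompt: str) -> str:
          resp = client.messages.create(
              model=MODEL,
              max_tokens=1024,
              messages=[{"role": "user", "content": prompt}],
          )
          return resp.content[0].text

      def summarize_with_check(context: str) -> str:
          draft = call(f"Summarize the following text faithfully:\n\n{context}")

          # Self-reflection: flag any claims not supported by the context.
          critique = call(
              "List any claims in this summary that are not supported by the "
              "source, or reply 'OK' if it is fully grounded.\n\n"
              f"Source:\n{context}\n\nSummary:\n{draft}"
          )
          if critique.strip().upper().startswith("OK"):
              return draft

          # One cheap revision pass using the critique.
          return call(
              "Revise the summary so it only contains supported claims.\n\n"
              f"Source:\n{context}\n\nSummary:\n{draft}\n\nIssues:\n{critique}"
          )
      ```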

    • @janalgos · 6 months ago

      @@samwitteveenai would be neat to see a tutorial on how to use those techniques to reduce instances of hallucinations and improve overall response quality for the smaller models

  • @murilocurti1474 · 6 months ago

    Great explanation! As usual 😃 Do you think it’s possible to do the same process of sectioning using gpt3.5 turbo?

    • @samwitteveenai · 6 months ago +1

      Yes, but Haiku, Llama 3, and another model coming out next week are better than 3.5 for this.

    • @murilocurti1474 · 6 months ago

      @@samwitteveenai Thanks!!!

  • @rajesh_kisan · 6 months ago

    Can you share the code, or at least the prompts? I tried implementing it but ran into challenges with creating sections.
    I'm running Llama 8B locally, and I also tried Llama 70B.
    If you can share it, it'd be a great help.

  • @123arskas · 6 months ago

    I still can't differentiate between the "New Summarization System" you talked about and the "Refine Method".
    Refine also tends to keep the context of each chunk.
    The entire video felt like a promotional ad for "Haiku".

    • @samwitteveenai · 6 months ago

      This is quite different in that you can't run Refine in parallel; you have to queue and wait for each chunk. Regarding the ad for Haiku, I do think it is in a class of its own until the new models get announced next week.
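      A rough sketch of the difference being described: Refine has to run chunk by chunk, while per-section summaries (each given the full document as context, in the spirit of the approach discussed in the video) can be fired off in parallel. The anthropic SDK usage, model id, and prompts are assumptions:

      ```python
      # Sequential Refine vs. parallel per-section summarization.
      from concurrent.futures import ThreadPoolExecutor
      import anthropic

      client = anthropic.Anthropic()
      MODEL = "claude-3-haiku-20240307"  # assumed Haiku model id

      def call(prompt: str) -> str:
          resp = client.messages.create(
              model=MODEL,
              max_tokens=1024,
              messages=[{"role": "user", "content": prompt}],
          )
          return resp.content[0].text

      def refine_summary(chunks: list[str]) -> str:
          # Sequential: each step must wait for the running summary from the
          # previous step, so the calls cannot overlap.
          summary = ""
          for chunk in chunks:
              summary = call(f"Existing summary:\n{summary}\n\nRefine it with:\n{chunk}")
          return summary

      def parallel_section_summary(full_doc: str, sections: list[str]) -> str:
          # Parallel: every section is summarized independently, then combined once.
          def summarize(section: str) -> str:
              return call(f"Full document:\n{full_doc}\n\nSummarize this section:\n{section}")

          with ThreadPoolExecutor(max_workers=8) as pool:
              parts = list(pool.map(summarize, sections))
          return call("Combine these section summaries into one summary:\n\n" + "\n\n".join(parts))
      ```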

  • @micbab-vg2mu · 6 months ago

    Great video - thank you :)

  • @mickelodiansurname9578 · 6 months ago

    @Sam Witteveen Has anyone ever told you that you are the spitting image of the Poker player Daniel Negreanu?

  • @josephroman2690 · 6 months ago

    I would very much like to contribute to this project if possible; if not, I would at least like to be one of the testing users.

  • @SonGoku-pc7jl · 6 months ago

    I like the whole video so much :) thanks!!!

  • @dhrumil5977 · 6 months ago

    Download 😅

    • @JacobAsmuth-jw8uc · 6 months ago

      What?

    • @dhrumil5977 · 6 months ago

      @@JacobAsmuth-jw8uc the video haha

    • @explorer945 · 6 months ago

      @@dhrumil5977 Ah, because of the possibility of it getting deleted?