New Summarization via In Context Learning with a New Class of Models

  • Published 18 Jun 2024
  • In this video I discuss some of the recent changes in building LLM apps and in choosing which LLMs to use. I also show how I used some of these changes to build a note-taking app that creates summaries and long-form notes.
    🕵️ Interested in building LLM Agents? Fill out the form below
    Building LLM Agents Form: drp.li/dIMes
    👨‍💻Github:
    github.com/samwit/langchain-t... (updated)
    github.com/samwit/llm-tutorials
    ⏱️Time Stamps:
    00:00 Intro
    01:01 Personalization and Curation
    01:18 Personalization
    02:06 Curation
    05:20 The State of LLMs
    09:29 Long Output Use Cases
    11:20 Claude 3: Haiku
    12:10 Why Haiku
    13:31 Haiku Challenges
    14:23 Metaprompt
    14:57 Haiku Exemplars
    17:29 Summarizations
    17:32 Types of Summarization
    18:59 Simple Stuffing
    19:21 Map Reduce
    20:06 Refining our Calls
    20:21 Map ReRank
    20:31 New Summarization System
    23:28 Sectioning
    23:49 Advantages
    24:17 Disadvantages
    25:01 Conclusion
  • Science & Technology

COMMENTS • 50

  • @hienngo6730
    @hienngo6730 1 month ago +22

    Criminally underrated channel. One of the absolute best AI/LLM YouTube channels that somehow only has 55K subs?!? Thank you for all of your hard-earned insights; very useful for jumpstarting our own projects.

  • @ozind12
    @ozind12 1 month ago +5

    I could not find the link to the code for the summarization app discussed in the video. It would be interesting to see the flow.

  • @kenchang3456
    @kenchang3456 1 month ago +2

    Another excellent video. Thanks for pushing forward the practicalities of using the variety of models and services that could be appropriate for your project and where you are in the progress of your project.

  • @HowtoSmartWork
    @HowtoSmartWork 1 month ago +2

    This kind of content is really impressive. I was working on a note-taking app and trying to build a scalable app with a team.
    I had a lot of challenges; after watching your explanation I was able to relate to this.

  • @supercurioTube
    @supercurioTube 1 month ago +1

    I've been listening to your videos for months now, and as I'm transitioning towards building LLM apps myself, I'm really grateful for the insights you've been sharing all along.
    It's invaluable to learn from someone who has been building realistic, real-world products based on LLMs while following the research closely.

    • @samwitteveenai
      @samwitteveenai  1 month ago +3

      Thanks, this is exactly what I was aiming to do with the channel. I never desired to be a "youtuber"; I started by showing some friends cool stuff with LLMs and it took off. I try not to hype stuff, just show what can be done with various models.

  • @davidtindell950
    @davidtindell950 1 month ago +2

    I have been reviewing many of your YT videos and evaluating your many code examples.
    This video is certainly different in that it makes us think about how to transition from the
    current state and applications of LLMs to new personalized and curated practical solutions
    -- especially by applying smaller, faster, lower-cost "variant" LLMs like Anthropic's Haiku ...
    I agree that we can find a "middle ground" between Sam A.'s two so-called "choices"!

    • @davidtindell950
      @davidtindell950 1 month ago +1

      Now, going back to review your earlier "Mastering Haiku" video!

  • @sayanosis
    @sayanosis 1 month ago

    You single-handedly explain literally everything someone needs. Thank you so so much for what you do ❤

  • @reza2kn
    @reza2kn 1 month ago

    Brilliant video Sam! 🤗 Great job! Learned a ton!

  • @DannyGerst
    @DannyGerst 18 days ago

    Great idea. I did something similar, grouping sentences by topic (with the Louvain community detection algorithm), so that sentences with the same semantic meaning are grouped together. It works incredibly well for chapter-by-chapter book summaries. The benefit is that topics (what you called sections) are grouped even if the topic comes up again in later sections. But in the end it was Map Reduce. So I am curious to see the result combined with your new system.
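The grouping idea described in this comment can be sketched roughly as follows. A real pipeline would score sentence pairs with embeddings and cluster the graph with Louvain community detection (e.g. `networkx.community.louvain_communities`); this dependency-free sketch substitutes Jaccard word overlap and connected components so the shape of the approach is runnable on its own. All names here are illustrative, not the commenter's actual code.

```python
# Sketch: group sentences into topic clusters via a similarity graph,
# so same-topic sentences land together even when a topic recurs later.
from collections import deque


def jaccard(a: str, b: str) -> float:
    # Word-overlap similarity; a stand-in for embedding cosine similarity.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def group_sentences(sentences: list[str], threshold: float = 0.3) -> list[list[str]]:
    n = len(sentences)
    # Connect sentences whose similarity passes the threshold.
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if jaccard(sentences[i], sentences[j]) >= threshold:
                adj[i].append(j)
                adj[j].append(i)
    # Connected components via BFS; Louvain would refine these into
    # finer communities on a weighted graph.
    seen, groups = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            comp.append(node)
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        groups.append([sentences[i] for i in sorted(comp)])
    return groups
```

Each group can then be summarized independently and the results merged, which is the Map Reduce step the commenter mentions.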

  • @bennie_pie
    @bennie_pie 1 month ago +1

    Your talk summarised my project... I too have been using Claude (Opus, however), but my free API access ends in a few days, so in order to build something that could go live, trading down to Haiku but with multiple iterations was just starting to dawn on me. And then boom, you're solving issues or suggesting use cases I hadn't even considered! This video has been absolute gold - thank you.

    • @jarad4621
      @jarad4621 1 month ago

      Yeah, when Claude came out I didn't give a crap about the fancy models; it was the use cases for the good-but-cheap models like Haiku, and now Llama 3, that excite me. Low cost but still effective = $$$$$$. My Phi-3 agent swarm with agentic self-reflection, auto error correction, iterative improvement, and quality assurance is going to be EPIC and free. Trust me, learn agents ASAP; the time is coming, be ready and be at the front.

    • @bennie_pie
      @bennie_pie 1 month ago

      @@jarad4621 Yeah, local agents that work while you sleep are the way. But it can be like herding chickens... time-consuming and you still get shit. It's that fine balance of an intelligent model, good prompting and good oversight. CrewAI seems decent, or is it better to DIY?

  • @MeinDeutschkurs
    @MeinDeutschkurs 1 month ago

    I love your thoughts!
    In the moment each word takes flight, spoken, penned, or whispered into the night, we dream a future bright where models converse, their voices intertwine.
    An IoT daydream, woven from the threads of thought and machine's silent hum.

  • @ralph5768
    @ralph5768 1 month ago

    Thanks Sam! I have been really tinkering with summarization and this helps a LOT. Subscribe + like.

  • @experter_analyser
    @experter_analyser 1 month ago

    I have always found the videos very interesting and educational, with different new thoughts. ❤

  • @puremajik
    @puremajik 5 days ago

    Thank you, this was very instructive. Can you recommend the best libraries for: 1) sectioning a document based on topic changes, 2) summarizing each section while maintaining contextual continuity and coherence, and 3) combining the summaries into a cohesive final summary?
    I'm thinking something like transformers (Hugging Face), spaCy, Gensim, pandas?
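The three-step pipeline asked about in this comment can be sketched as a minimal skeleton. This is not the app's actual code: `call_llm` is a hypothetical placeholder for whichever client you pick (Anthropic SDK, a Hugging Face pipeline, etc.) and blank-line splitting stands in for real topic-change detection; both are stubbed so the control flow runs on its own.

```python
# Skeleton: (1) section the document, (2) summarize each section with
# the full document passed as context, (3) combine the summaries.

def call_llm(prompt: str) -> str:
    # Stub: replace with a real model call.
    return "summary: " + prompt.splitlines()[-1][:60]


def split_into_sections(document: str) -> list[str]:
    # Naive stand-in for topic-change detection (e.g. spaCy + embeddings).
    return [s.strip() for s in document.split("\n\n") if s.strip()]


def summarize_document(document: str) -> str:
    sections = split_into_sections(document)
    section_summaries = []
    for section in sections:
        # The full document rides along as context so each section
        # summary stays consistent with the whole.
        prompt = (
            "Summarize the section below, staying consistent with the full document.\n"
            f"Full document:\n{document}\n"
            f"Section:\n{section}"
        )
        section_summaries.append(call_llm(prompt))
    combine_prompt = (
        "Combine these into one cohesive summary:\n" + "\n".join(section_summaries)
    )
    return call_llm(combine_prompt)
```

Note that the section calls are independent, so unlike the Refine chain they can be issued in parallel.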

  • @micbab-vg2mu
    @micbab-vg2mu 1 month ago

    Great video - thank you :-)

  • @SonGoku-pc7jl
    @SonGoku-pc7jl 1 month ago

    I like all the videos so much :) Thanks!!!

  • @TheMelo1993
    @TheMelo1993 1 month ago

    Great content 👍! Do you have any suggestions on how to implement this? Or a repo?

  • @SergioMunozGonzalez
    @SergioMunozGonzalez 1 month ago

    Thank you so much for the video, Sam. Do you have any implementation of this new summarization method?
    Thank you in advance.

  • @stephaneleroi8506
    @stephaneleroi8506 1 month ago

    Excellent. Do you know where the summarization with sections and the full document in each section is implemented?

  • @comfixit
    @comfixit 1 month ago +2

    Content and commentary were top notch, thank you for this video. An area for improvement is that you way overused the B-roll. The first half of the video was kind of off-putting; in the last half the B-roll was all good, as it related to the subject. For example, when you are talking about Anthropic's family of models and you show Anthropic logos, pricing charts, performance charts, etc., that's great stuff. But at the beginning you are talking and we are seeing animations of robots with a sticker that says Hello; that doesn't work. I would rather see a talking head in those cases if you don't have B-roll that is strongly related to the content.
    Just a personal preference but very much enjoyed the video content.

  • @willjohnston8216
    @willjohnston8216 1 month ago

    Another great video. Sam, have you found any methods for having the LLM spend more time on the analysis? The results I'm getting seem generic, like something summarized from the web. I'd like to find a way to force more thinking through the problem set.

    • @samwitteveenai
      @samwitteveenai  1 month ago

      This is a really good question. I think there are at least 2 paths to this. 1. Better alignment training, where the model can push back and clarify things better. A version of this (perhaps not the best version eventually) will probably come in the next OpenAI model on Monday. This kind of clarification in analysis is a very important one for self-recursive learning. This is something I have been running a lot of tests on, and testing some unreleased models with, but no amazing results I can talk about yet. 2. You can do something similar by prompting from multiple angles, e.g. have one prompt that rewrites multiple questions or angles of analysis. This is a bit of what the summarization prompts do in the app I show.
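The second path in this reply (prompting from multiple angles) can be sketched in a few lines. The angle questions and `call_llm` are illustrative assumptions, not the app's actual prompts; the stub just lets the structure run standalone.

```python
# Sketch: ask the model the same question from several angles, then
# synthesize, instead of relying on one generic pass.

ANGLES = [
    "What are the key claims and how well are they supported?",
    "What is missing or assumed without justification?",
    "What would a skeptical expert push back on?",
]


def call_llm(prompt: str) -> str:
    # Stub: replace with a real model call.
    return f"[analysis of: {prompt[:40]}...]"


def multi_angle_analysis(text: str) -> str:
    # One call per angle forces effort on each aspect separately.
    partials = [call_llm(f"{angle}\n\nText:\n{text}") for angle in ANGLES]
    return call_llm(
        "Synthesize these analyses into one assessment:\n" + "\n".join(partials)
    )
```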

  • @ralph5768
    @ralph5768 1 month ago

    Do you have a code example for this new type of summarisation?

  • @murilocurti1474
    @murilocurti1474 1 month ago

    Great explanation! As usual 😃 Do you think it’s possible to do the same process of sectioning using gpt3.5 turbo?

    • @samwitteveenai
      @samwitteveenai  1 month ago +1

      Yes, but Haiku, Llama 3 and another model coming out next week are better than 3.5 for this.

    • @murilocurti1474
      @murilocurti1474 1 month ago

      @@samwitteveenai Thanks!!!

  • @janalgos
    @janalgos 1 month ago

    My concern with smaller models is the relatively higher hallucination. What has your experience been with Haiku when it comes to hallucination?

    • @jarad4621
      @jarad4621 1 month ago

      Agentic patterns: reflection/review, iterative improvement, QA agents, collaboration, one master Opus overseer to manage, etc. This will solve all your concerns about quality and still be super cheap.

    • @samwitteveenai
      @samwitteveenai  1 month ago +1

      I don't think the hallucinations are that much more of a problem. Never use an LLM for facts; use the context for that. The advantage with the cheaper calls is you can do self-reflection etc. to double-check these.
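The self-reflection idea in this reply can be sketched as a check-and-revise loop: because small-model calls are cheap, a draft can be verified against the source context and revised. `call_llm` is a hypothetical placeholder for a real client, stubbed so the loop runs standalone; the prompts are illustrative, not from the video.

```python
# Sketch: answer from context, then use extra cheap calls to check the
# draft against the context and revise if anything is ungrounded.

def call_llm(prompt: str) -> str:
    # Stub: replace with a real model call.
    return "OK" if prompt.startswith("Check") else "draft answer"


def answer_with_reflection(question: str, context: str, max_rounds: int = 2) -> str:
    draft = call_llm(
        f"Answer using only this context.\nContext:\n{context}\nQ: {question}"
    )
    for _ in range(max_rounds):
        verdict = call_llm(
            "Check the answer below against the context; reply OK if every "
            f"claim is grounded, else list the problems.\nContext:\n{context}\nAnswer:\n{draft}"
        )
        if verdict.strip() == "OK":
            break  # Grounded: stop spending calls.
        draft = call_llm(
            f"Revise the answer to fix these problems:\n{verdict}\nAnswer:\n{draft}"
        )
    return draft
```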

    • @janalgos
      @janalgos 1 month ago

      @@samwitteveenai would be neat to see a tutorial on how to use those techniques to reduce instances of hallucinations and improve overall response quality for the smaller models

  • @rajesh_kisan
    @rajesh_kisan 1 month ago

    Can you share the code, or at least the prompts? I tried implementing it but faced challenges with creating sections.
    I'm using Llama 8B locally, and have also tried Llama 70B.
    If you can share it, it would be a great help.

  • @123arskas
    @123arskas 1 month ago

    I still can't differentiate the "New Summarization System" you talked about vs. the Refine method.
    Refine tends to keep the context of each chunk too.
    The entire video felt like a promotional ad for "Haiku".

    • @samwitteveenai
      @samwitteveenai  1 month ago

      This is quite different in that you can't do Refine in parallel; you have to queue and wait. Regarding the ad for Haiku, I do think it is in a class of its own until new models get announced next week.

  • @josephroman2690
    @josephroman2690 1 month ago

    I would very much like to contribute to this project if possible; if not, I would at least like to be one of the testing users.

  • @mickelodiansurname9578
    @mickelodiansurname9578 1 month ago

    @Sam Witteveen Has anyone ever told you that you are the spitting image of the Poker player Daniel Negreanu?

  • @dhrumil5977
    @dhrumil5977 1 month ago

    Download 😅