Criminally underrated channel. One of the absolute best AI/LLM YouTube channels that somehow only has 55K subs?!? Thank you for all of your hard-earned insights; very useful to jumpstart our own projects.
thanks for the kind words.
So fully agree!
I love the part of this video that goes over the types of summarization. That part could be a video on its own.
I've been listening to your videos for months now, and as I'm transitioning towards building LLM apps myself, I'm really grateful for the insights you've been sharing all along.
It's invaluable to learn from someone who's been building realistic, real-world products based on LLMs while following the research closely.
Thanks, this is exactly what I was aiming to do with the channel. I never desired to be a "youtuber"; I started by showing some friends cool stuff with LLMs and it took off. I try not to hype things, just show what can be done with various models.
I have been reviewing many of your YT Videos and evaluating your many code examples.
This video is certainly different in that it makes us think about how to transition from the
current state and applications of LLMs to new personalized and curated practical solutions
-- especially by applying smaller, faster, lower-cost "variant" LLMs like Anthropic's Haiku ...
I agree that we can find a "middle ground" between Sam A.'s two so-called "choices"!
Now, going back to review your earlier "Mastering Haiku" video!
Another excellent video. Thanks for pushing forward the practicalities of using the variety of models and services that could be appropriate for your project and where you are in the progress of your project.
This kind of content is really impressive. I was working on a note-taking app and trying to build a scalable app with a team.
We had a lot of challenges; after watching your explanation I was able to relate to this.
You single-handedly explain literally everything someone needs. Thank you so so much for what you do ❤
Great idea. I did something similar, grouping sentences together by topic (Louvain community detection algorithm) so that sentences with the same semantic meaning are grouped together. It works incredibly well for chapter-by-chapter book summaries. The benefit is that topics (what you called sections) are grouped together even if the topic comes up again in later sections. But in the end it was Map Reduce. So I am curious to see the result combined with your new system.
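The grouping idea above can be sketched roughly. Louvain itself is usually run via networkx or python-louvain over an embedding-similarity graph; the dependency-free Python sketch below substitutes a simple threshold-plus-union-find grouping over bag-of-words cosine similarity to illustrate the same effect (all names and the threshold value are illustrative, not from the video or the comment):

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def group_sentences(sentences, threshold=0.2):
    # Union-find: merge sentences whose pairwise similarity crosses the
    # threshold, so each resulting group approximates one "topic" community.
    parent = list(range(len(sentences)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    vecs = [Counter(s.lower().split()) for s in sentences]
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if cosine(vecs[i], vecs[j]) >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i, s in enumerate(sentences):
        groups.setdefault(find(i), []).append(s)
    return list(groups.values())
```

Each group could then be summarized independently, map-reduce style, exactly as the comment describes.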
I could not find the link to the code for the summarization app talked about in the video. It would be interesting to see the flow.
Your talk summarised my project... I too have been using Claude, except Opus. However, my free API access ends in a few days, so in order to build something which could go live, trading down to Haiku but with multiple iterations was just starting to dawn on me, and then boom, you're solving issues and suggesting use cases I hadn't even considered! This video has been absolute gold - thank you.
Yeah, when Claude came out I didn't give a crap about the fancy models; it was the use cases for the good-but-cheap models like Haiku, and now Llama 3, that excite me. Low cost but still effective = $$$$$$. My Phi-3 agent swarm with agentic self-reflection, auto error correction, iterative improvement, and quality assurance is going to be EPIC and free. Trust me, learn agents ASAP; the time is coming, be ready and be at the front.
@@jarad4621 Yeah, local agents that work while you sleep is the way. But it can be like herding chickens... time-consuming, and you still get shit. It's that fine balance of an intelligent model, good prompting, and good oversight. CrewAI seems decent, or is it better to DIY?
I love your thoughts!
In the moment each word takes flight, spoken, penned, or whispered into the night, we dream a future bright where models converse, their voices intertwine.
An IoT daydream, woven from the threads of thought and machine's silent hum.
Thank you, this was very instructive. Can you recommend the best libraries for: 1) sectioning a document based on topic changes, 2) summarizing each section while maintaining contextual continuity and coherence, and 3) combining the summaries into a cohesive final summary?
I'm thinking something like transformers (Hugging Face), spaCy, Gensim, pandas?
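For step 1, sentence-transformers or spaCy embeddings with a similarity cutoff between adjacent sentences is a common approach; as a rough, dependency-free illustration of that idea (the function names and threshold here are my own, not from any of those libraries):

```python
from collections import Counter
from math import sqrt

def _cos(a, b):
    # Cosine similarity between two bag-of-words Counters.
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def split_into_sections(sentences, threshold=0.1):
    # Start a new section whenever adjacent sentences are too dissimilar,
    # i.e. a likely topic change.
    vecs = [Counter(s.lower().split()) for s in sentences]
    sections = [[sentences[0]]]
    for i in range(1, len(sentences)):
        if _cos(vecs[i - 1], vecs[i]) < threshold:
            sections.append([])
        sections[-1].append(sentences[i])
    return sections
```

Steps 2 and 3 are then plain LLM calls (one per section, plus one combining call), so any client library works; pandas isn't really needed unless you want to track results in a table.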
Thanks Sam! I have been really tinkering with summarization and this helps a LOT. Subscribe + like
Excellent. Do you know where the summarization with sections and the full document in each section is implemented?
Brilliant video Sam! 🤗 Great job! Learned a ton!
ty so much for the video sam, do you have any implementation of this new summarization method?
Thank you in advance
Great content 👍! Do you have any suggestions on how to implement this? Or a repo?
Another great video. Sam, have you found any methods for having the LLM spend more time on the analysis? The results I'm getting seem generic, like something summarized from the web. I'd like to find a way to force more thinking through the problem set.
This is a really good question. I think there are at least two paths to this. 1. Better alignment training, where the model can push back and clarify things better. A version of this (perhaps not the best eventual version) will probably come in the next OpenAI model on Monday. This kind of clarification in analysis is a very important one for Self Recursive Learning. It's something I have been running a lot of tests on, and testing some unreleased models with, but no amazing results I can talk about yet. 2. You can do something similar by prompting from multiple angles, e.g. have one prompt that rewrites the question into multiple questions or angles of analysis. This is a bit of what the summarization prompts do in the app I show.
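The multi-angle prompting in point 2 could be sketched like this; `llm` is a placeholder for whatever chat-completion call you use, and the prompts and function name are illustrative, not the actual ones from the app:

```python
def multi_angle_analysis(question, llm):
    # `llm` is any callable mapping a prompt string to a response string.
    angles = [
        "List the key assumptions behind this question, then answer it:",
        "Answer this question, focusing on risks and edge cases:",
        "Answer this question as if for a skeptical expert reviewer:",
    ]
    # One call per angle, then a final call to merge the drafts.
    drafts = [llm(f"{prefix}\n\n{question}") for prefix in angles]
    merge_prompt = (
        "Combine these drafts into one analysis, keeping only "
        "well-supported points:\n\n" + "\n---\n".join(drafts)
    )
    return llm(merge_prompt)
```

Forcing several distinct passes over the same question, then merging, is the cheap-model analogue of "spending more time thinking."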
Content and commentary were top notch, thank you for this video. One area for improvement: you way overused the B-roll. The first half of the video was kind of off-putting; in the last half, the B-roll was all good as it related to the subject. For example, when you are talking about the Anthropic family of models and you show Anthropic logos, pricing charts, performance charts, etc., that is great stuff. But at the beginning you are talking and we are seeing animations of robots with a sticker that says "Hello". That doesn't work. I would rather see a talking head in those cases if you don't have B-roll that is strongly related to the content.
Just a personal preference but very much enjoyed the video content.
Do you have a code example for this new type of summarisation?
I have always found the videos very interesting and educational, with different new thoughts. ❤
My concern with smaller models is the relatively higher hallucination. What has your experience been with Haiku when it comes to hallucination?
Agentic patterns: reflection review, iterative improvement, QA agent collaboration, one master Opus overseer to manage, etc. This will solve all your concerns about quality and still be super cheap.
I don't think the hallucinations are that much more of a problem. Never use an LLM for facts; use the context for that. The advantage with the cheaper calls is you can do self-reflection etc. to double-check these.
@@samwitteveenai would be neat to see a tutorial on how to use those techniques to reduce instances of hallucinations and improve overall response quality for the smaller models
Great explanation! As usual 😃 Do you think it's possible to do the same process of sectioning using GPT-3.5 Turbo?
Yes, but Haiku, Llama 3, and another model coming out next week are better than 3.5 for this.
@@samwitteveenai Thanks!!!
Can you share the code, or at least the prompts? I tried implementing it but faced challenges with creating sections.
I'm using Llama 8B locally, and I also tried Llama 70B.
If you can share it, it'd be a great help.
I still can't differentiate the "New Summarization System" you talked about vs. the "Refine Method".
Refine tends to keep the context of each chunk too.
The entire video felt like a promotional ad for "Haiku".
This is quite different in that you can't run Refine in parallel; you have to queue and wait. Regarding the ad for Haiku, I do think it is in a class of its own until new models get announced next week.
Great video - thank you :)
@Sam Witteveen Has anyone ever told you that you are the spitting image of the Poker player Daniel Negreanu?
I would very much like to contribute to this project, if it is possible; if not, I would at least like to be one of the testing users.
I like all the videos so much :) thanks!!!
thanks!
Download 😅
What?
@@JacobAsmuth-jw8uc the video haha
@@dhrumil5977 ah possibility of getting it deleted?