The REAL Cost of LLMs (And How to Reduce Cost by 78%+)

  • Published Aug 3, 2024
  • I want to give you a step-by-step guide on how to reduce LLM cost by 70%, and unpack why it costs so much right now
    Get free HubSpot AI For Marketers Course: clickhubspot.com/xut
    🔗 Links
    - Join my community: www.skool.com/ai-builder-club...
    - Follow me on twitter: / jasonzhou1993
    - Join my AI email list: crafters.ai/
    - My discord: / discord
    - Inbox Agent: • After 7 days letting A...
    - Research Agent: • "Wait..this AI Agent d...
    - James Brigg on Agent Memory: www.pinecone.io/learn/series/...
    - Another video about details for LLM cost tracking: • How to reduce LLM cost...
    ⏱️ Timestamps
    0:00 How I burned $5000 on OpenAI
    2:41 My experience with AI Girlfriend Project
    6:01 HubSpot AI For Marketers
    7:23 How to Reduce LLM Cost
    8:33 Method 1 - Finetune
    9:51 Method 2 - Cascade
    10:35 Method 3 - LLM Router
    12:47 Method 4 - Multi-Agent
    14:07 Method 5 - LLMLingua
    16:31 Method 6 - Optimise tool input/output
    17:16 Method 7 - Memory Optimisation
    19:26 LLM Monitor & Analytics
    20:09 Tutorial: Monitor & reduce 75% cost
    👋🏻 About Me
    My name is Jason Zhou, a product designer who shares interesting AI experiments & products. Email me if you need help building AI apps! ask@ai-jason.com
    #mixtral #gpt4turbo #gpt4 #ai #artificialintelligence #tutorial #stepbystep #openai #llm #chatgpt #largelanguagemodels #largelanguagemodel #bestaiagent #chatgpt #agentgpt #agent #autogen #autogpt
  • Science & Technology

COMMENTS • 155

  • @jasonfinance
    @jasonfinance 6 months ago +57

    I never got the point of setting up those LLM monitors before, but the step-by-step guide at the end showing how you use it & how it leads to real cost reduction is gold (70% is crazy!). Will try it out, thank you!

  • @que-tangclan5856
    @que-tangclan5856 6 months ago +17

    This is the best AI content I have seen all week. Thank you for this.

  • @kguyrampage95
    @kguyrampage95 6 months ago +12

    Bro that's crazy, I literally just wrote down notes today on different approaches to reducing costs. I was about to test them out and then saw this video in my inbox. Damn, very on time.

  • @Joe-bp5mo
    @Joe-bp5mo 6 months ago +14

    Didn't realise the cost gap between GPT-4 & open-source models like Mixtral was so big! 200x more expensive really changes how I think about building LLM products.
    Thanks for sharing! Will definitely try to optimise my LLM apps!
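
    For context, a quick back-of-the-envelope sketch of where a gap like that comes from; the per-1K-token prices below are illustrative placeholders, not figures from the video, so swap in current pricing before relying on them:

      # Rough monthly cost comparison (prices in USD per 1K tokens are
      # illustrative assumptions only -- check the providers' pricing pages).
      PRICES = {
          "gpt-4-32k": {"input": 0.06, "output": 0.12},
          "gpt-4": {"input": 0.03, "output": 0.06},
          "mixtral-8x7b (hosted)": {"input": 0.0006, "output": 0.0006},
      }

      def monthly_cost(model, input_tokens, output_tokens):
          p = PRICES[model]
          return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

      # Example workload: 10M input + 2M output tokens per month.
      for model in PRICES:
          print(f"{model}: ${monthly_cost(model, 10_000_000, 2_000_000):,.2f}")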

  • @agentDueDiligence
    @agentDueDiligence 6 months ago +23

    Hi Jason!
    Another alternative for measuring costs in your script is to simply use the chat completion information provided by the OpenAI API.
    Every time you call the API, it returns the total tokens in the response JSON in the "usage" dictionary. That way, you can monitor & control your usage as well.
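
    For anyone who wants to try this, a minimal sketch with the openai Python SDK (v1.x); the price table at the bottom is a made-up placeholder:

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": "Summarise LLM cost-saving methods in one line."}],
      )

      # The "usage" block comes back with every chat completion.
      usage = response.usage
      print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)

      # Convert tokens to dollars with your own price table (placeholder numbers).
      PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}
      cost = (usage.prompt_tokens / 1000) * PRICE_PER_1K["input"] \
           + (usage.completion_tokens / 1000) * PRICE_PER_1K["output"]
      print(f"~${cost:.6f} for this call")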

  • @ZacMagee
    @ZacMagee 6 months ago +4

    Love your content man. You have helped me really expand my knowledge and push my boundaries

  • @michaelwallace4757
    @michaelwallace4757 6 months ago +9

    A step by step build of an agent architecture would be invaluable! Thank you for the video.

  • @Ke_Mis
    @Ke_Mis 6 months ago +9

    Your content is just superb as always Jason!

  • @JimMendenhall
    @JimMendenhall 6 months ago +1

    Thanks for sharing your insights from your work. It's very helpful!

  • @holdingW0
    @holdingW0 5 months ago +1

    Excellent video. Subbed and hope you keep the content coming!

  • @leandroimail
    @leandroimail 6 months ago +2

    Thanks very much for this video. I have been having problems with the cost of my agents. I will follow the tips and clues that you gave. Thanks again.

  • @gabrieleguo
    @gabrieleguo 5 months ago

    Thanks Jason, your content is always on point and very insightful. Keep it up man!

  • @MrTalhakamran2006
    @MrTalhakamran2006 5 months ago

    Thank you Jason for your hard work to put this together.

  • @the-ghost-in-the-machine1108
    @the-ghost-in-the-machine1108 6 months ago

    This was an intense, highly informative lecture. Thanks Jason, appreciate your work!

  • @gsolaich
    @gsolaich 5 months ago +1

    We were planning to build AI-assistant-style apps but always pulled back due to the cost they incur. This is a fabulous video that has given us a new direction to go ahead. Thanks a lot... looking forward to seeing other videos.

  • @GjentiG4
    @GjentiG4 6 months ago

    Great vid! Keep up the good work

  • @serenditymuse
    @serenditymuse 6 months ago +3

    Excellent. Most of his videos are, but this one was especially useful to me.

  • @JohnByrneAus
    @JohnByrneAus 5 months ago

    Excellent video! I just ran into issues with memory for conversations and I really like the strategies you've offered in this. Thank you.

  • @RichardGetzPhotography
    @RichardGetzPhotography 6 months ago

    Excellent work Jason

  • @omarzidan6840
    @omarzidan6840 6 months ago

    We love you Jason. Thanks a lot!

  • @timgzl256
    @timgzl256 6 months ago +1

    This channel is the best school in existence to date.
    Thank you Jason

  • @vinception777
    @vinception777 6 months ago +1

    Thanks a lot. Like James Briggs and a few others, your content is outstandingly great. This is really important information that I need at work 🙏☺

  • @taylorthompson4212
    @taylorthompson4212 6 months ago +4

    This video came at the perfect time. Thank you

  • @aiforsocialbenefit
    @aiforsocialbenefit 6 months ago

    Excellent tutorial. Thank you!

  • @oscarcharliezulu
    @oscarcharliezulu 6 months ago +3

    Excellent video, great to hear real-world experience from a real dev.

  • @hidroman1993
    @hidroman1993 6 months ago +5

    "Comment if you want a video about this"? Your videos are so good I will click anyway ❤️

  • @richuanglin6824
    @richuanglin6824 6 months ago

    27 minutes of solid gold! Thanks Jason

  • @misterloafer5021
    @misterloafer5021 6 months ago +4

    Yes, please do a video on multi-agent methods

  • @quickcinemarecap
    @quickcinemarecap 6 months ago +4

    00:05 Using autonomous sales agents led to unexpectedly high costs.
    02:11 AI startup costs fluctuate with usage
    06:12 Marketing teams are adopting AI for automation and hyper-personalized customer experiences.
    08:19 Using smaller models can reduce cost by orders of magnitude.
    12:27 Customize a router for cost reduction
    14:23 Using small models can significantly reduce the token and word count for large language models
    18:13 Reducing large language model costs
    19:55 Analyze token consumption for cost optimization
    23:18 The agent executor identifies the cost breakdown and offers cost reduction strategies
    25:00 Using GPT-3.5 Turbo and staff documents for detailed and cost-effective summarization.

    • @christopherd.winnan8701
      @christopherd.winnan8701 6 months ago

      Usman, what does it cost in terms of tokens to run your summary AI? Does it use an open-source model?

    • @quickcinemarecap
      @quickcinemarecap 6 months ago

      @@christopherd.winnan8701 0.05 for every 30 minutes of summary

    • @quickcinemarecap
      @quickcinemarecap 6 months ago

      @@christopherd.winnan8701 It's GPT-4 and it costs me 30 cents for every 1 hour of summary

  • @clamhammer2463
    @clamhammer2463 6 months ago +1

    I had this idea for LLM routing a while back and wondered why nobody had done it. I figured there was some sort of information I didn't have that was stopping it.

  • @betun130
    @betun130 2 months ago

    Superb content Jason, I will highly recommend your videos to everyone getting their hands dirty with LLMs. I am gonna try some of these myself. It's a shame I didn't build it earlier, because something like the LLM router had occurred to me, but I didn't have the patience to implement it.

  • @zhubarb
    @zhubarb 6 months ago

    This is a very good video. Appreciate it.

  • @jim02377
    @jim02377 6 months ago +1

    Excellent video! Saved me lots of time trying to figure that out. Keep up the great work!

  • @mattbegley1345
    @mattbegley1345 4 months ago

    Excellent!👍 Applying that Assistant Hierarchy to your Sales Agent would be a good video.

  • @Beloved_Digital
    @Beloved_Digital 6 months ago +1

    I am a newbie when it comes to building AI-powered apps.
    Although I don't fully understand everything you say, because I am still learning the basics, all I can say is thank you for sharing this valuable content with us.

  • @nexusinfosec
    @nexusinfosec 6 months ago +1

    Yes please, I'd love a video deep-diving into agent architecture for AutoGen

  • @ivant_true
    @ivant_true 6 months ago

    Man, super useful video, thanks!

  • @nicechannel9720
    @nicechannel9720 6 months ago +1

    A great dive into the cost of AI models, as it is hard to find related content. Can you do a video about how much OpenAI is roughly spending on computation cost, and also how this constraint will hinder the adoption of these models in the enterprise space? Great job man 👍

  • @rishi8413
    @rishi8413 6 months ago

    Really love your videos. Are there any packages or libraries for the 7 methods you discussed?

  • @matten_zero
    @matten_zero 6 months ago

    I'm taking all of this for my startup. This is the way and creates a moat for you assuming you hold on to the weights afterwards

  • @archerkee9761
    @archerkee9761 4 months ago

    Nice video, thanks!

  • @sewingsugar9892
    @sewingsugar9892 6 months ago +1

    This channel is so underrated

  • @SophieCheung
    @SophieCheung 5 months ago

    Thanks for your video! :)

  • @matten_zero
    @matten_zero 6 months ago +17

    This is the biggest flex ever! 💪 I can only dream of being as cool an AI engineer as you. I thought building a digital agent with automatic voice that can do RAG was cool.
    There are levels to this game, and Jason is in a whole different world. Thanks for posting these videos. They're educational, funny and inspirational for me.

  • @tirthb
    @tirthb 4 months ago

    Wow, super practical tips.

  • @RolandoLopezNieto
    @RolandoLopezNieto 6 months ago

    Thank you very much for the video

  • @JashAmbaliya
    @JashAmbaliya 6 months ago

    Really helpful content

  • @goutamkelam6117
    @goutamkelam6117 6 months ago

    🎯 Key Takeaways for quick navigation:
    19:51 💡 *Analyze token consumption for cost optimization.*
    20:19 💻 *Install LangSmith and set up.*
    21:01 🛠️ *Set up environment variables for the connection (see the setup sketch after this list).*
    21:43 📊 *Implement tracking methods for insights.*
    22:12 📚 *Utilize LangChain for research projects.*
    23:06 📝 *Log project activities for monitoring.*
    24:03 💰 *Analyze token costs for optimization.*
    24:31 📉 *Reduce GPT-4 usage for cost savings.*
    25:12 📄 *Implement content summary for efficiency.*
    26:09 ✂️ *Optimize the script tool for better results.*
    Made with HARPA AI
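
    A minimal sketch of the LangSmith setup step mentioned above, assuming the langchain-openai package and a LangSmith API key; the project name is just an example:

      import os

      # LangSmith tracing is switched on via environment variables; every
      # LangChain call in this process is then logged to the named project.
      os.environ["LANGCHAIN_TRACING_V2"] = "true"
      os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
      os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
      os.environ["LANGCHAIN_PROJECT"] = "research-agent-cost-audit"  # example name

      from langchain_openai import ChatOpenAI

      llm = ChatOpenAI(model="gpt-3.5-turbo")
      print(llm.invoke("One sentence on why token usage should be monitored.").content)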

  • @SergiySev
    @SergiySev 5 months ago

    Such a good video!

  • @andresluna2517
    @andresluna2517 6 months ago

    Excellent content

  • @oryxchannel
    @oryxchannel 5 months ago +1

    See the groundswell paper dated Jan 29th 2024: "Towards Optimizing the Costs of LLM Usage." These Indian authors are gonna kick some serious butt regarding costs. I see the FrugalGPT paper in your video too. Thank you for offering real-world case scenarios from your personal experience. Edit: This video is a trove on frugal LLM building. Awesome job!

  • @hackerborabora7212
    @hackerborabora7212 6 months ago

    We love your videos 🎉❤

  • @Max-zy2ie
    @Max-zy2ie 6 months ago +3

    When building multi-agent orchestration systems, what is your preferred stack? Do you use LangChain, AutoGen or just native APIs?

  • @xugefu
    @xugefu 6 months ago +1

    Thanks!

  • @bhaumiks.6543
    @bhaumiks.6543 4 months ago

    I am interested in learning about the architecture. By the way, amazing videos...

  • @MaximIlyin
    @MaximIlyin 6 months ago +1

    Great video, thanks!
    Why not store agent conversation memory as embeddings and retrieve only the parts relevant (by cosine similarity) to the current user query as context?
    (Like a RAG for conversation memory)
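
    That is roughly how it could look; a minimal sketch with the OpenAI embeddings API and cosine similarity (the model name and top-k are just example choices):

      import numpy as np
      from openai import OpenAI

      client = OpenAI()
      memory = []  # list of (message_text, embedding_vector)

      def embed(text):
          resp = client.embeddings.create(model="text-embedding-3-small", input=text)
          return np.array(resp.data[0].embedding)

      def remember(message):
          memory.append((message, embed(message)))

      def recall(query, k=3):
          # Return the k stored messages most similar to the current query.
          q = embed(query)
          scored = [(float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e))), m)
                    for m, e in memory]
          return [m for _, m in sorted(scored, reverse=True)[:k]]

      remember("User's name is Sam and they are building a sales agent.")
      remember("User prefers answers as bullet points.")
      print(recall("What do we know about the user's project?", k=1))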

  • @breathandrelax4367
    @breathandrelax4367 5 months ago

    Hi Jason,
    thank you for the video, impressive work!
    While building the app, what do you think of using an if/else chain that reroutes to a particular LLM?
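
    An if/else chain is basically the simplest possible router; a sketch of what that could look like (the keywords, length threshold and model names are arbitrary examples, not from the video):

      # Crude rule-based router: cheap model by default, escalate only when
      # the request looks hard.
      HARD_KEYWORDS = ("analyse", "reason", "plan", "write code", "multi-step")

      def pick_model(user_message: str) -> str:
          msg = user_message.lower()
          if len(msg) > 2000 or any(k in msg for k in HARD_KEYWORDS):
              return "gpt-4"          # expensive, capable
          return "gpt-3.5-turbo"      # cheap default

      print(pick_model("Summarise this paragraph for me"))             # gpt-3.5-turbo
      print(pick_model("Plan a multi-step migration of our billing"))  # gpt-4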

  • @evermorecurious91
    @evermorecurious91 5 months ago

    BRO, this is gold!!!

  • @yazanrisheh5127
    @yazanrisheh5127 6 months ago +1

    Hey Jason. You said at around minute 9 that we should use a model like GPT-4 to generate data and then use that to fine-tune, but how much data do we need so that our fine-tuned Mistral model performs as well as GPT-4?
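
    There is no fixed number; in practice people often start with a few hundred to a few thousand task-specific examples and evaluate from there. For the data-collection part, a minimal sketch of gathering GPT-4 outputs into a JSONL training file (the prompts and file name are hypothetical):

      import json
      from openai import OpenAI

      client = OpenAI()

      prompts = [
          "Classify the sentiment of: 'The delivery was late again.'",
          "Classify the sentiment of: 'Support resolved my issue in minutes.'",
      ]

      with open("distill_dataset.jsonl", "w") as f:
          for p in prompts:
              answer = client.chat.completions.create(
                  model="gpt-4",
                  messages=[{"role": "user", "content": p}],
              ).choices[0].message.content
              # One prompt/response pair per line; most fine-tuning stacks
              # accept (or can convert from) a chat-style JSONL like this.
              f.write(json.dumps({"messages": [
                  {"role": "user", "content": p},
                  {"role": "assistant", "content": answer},
              ]}) + "\n")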

  • @ibrahimozgursucu3378
    @ibrahimozgursucu3378 6 months ago

    Even as a noob who can barely call himself a bash script kid, I'm embarrassed to say that the LLM router concept blew my mind.
    I've run countless thought experiments in my head, even trying to draw parallels between brains and AI for inspiration,
    yet it never occurred to me that, just like different areas of the brain perform different tasks and receive data accordingly, maybe an AI system should have something like an LLM router.
    It's so obvious yet brilliant.

  • @Ryan-yj4sd
    @Ryan-yj4sd 6 months ago

    Fine-tuning for token reduction is a key technique I've used.

  • @spicer41282
    @spicer41282 6 months ago

    Thank you Jason!
    I 👍 second the motion for your
    EcoAssistant 📹 video!

  • @seamussmyth2312
    @seamussmyth2312 6 months ago

    Superb 🏆

  • @kernsanders3973
    @kernsanders3973 4 months ago

    I think what would also work in the agents scenario: in real life there is a moderator for big disagreements between employees, which would be their team lead. So if a disagreement runs on for multiple replies, the TL needs to step in, lay down the rules and code of conduct for work, and make a final decision on the disagreement.

  • @didyouknowtriviayt
    @didyouknowtriviayt 5 months ago

    You can also use natural language processing lemmatization to convert words into their lemma, or root word, to reduce the content "weight" or token count. You don't need the extra word garbage like suffixes. LLMs do a good job of extracting meaning from lemmatized content. It's like you are cutting through the syntactic sugar of the English language, getting to the root meaning and not wasting the LLM's time.
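
    A small sketch of that idea with spaCy, also dropping stop words, which is where most of the token savings tend to come from (the example sentence is made up):

      import spacy

      # Requires: pip install spacy && python -m spacy download en_core_web_sm
      nlp = spacy.load("en_core_web_sm")

      text = ("The agents were repeatedly summarising the longest conversations "
              "before forwarding them to the more expensive model.")
      doc = nlp(text)

      # Keep lemmas, drop stop words and punctuation -- lossy but cheap
      # compression before the text hits the paid model.
      compressed = " ".join(t.lemma_ for t in doc if not t.is_stop and not t.is_punct)
      print(compressed)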

  • @blackswann9555
    @blackswann9555 5 months ago

    Very interesting

  • @alibahrami6810
    @alibahrami6810 6 months ago

    Great video! Could you please make a video about putting an LLM into production, with concerns like parallelism, memory and GPU usage, load balancing, and effective software architecture? How do you scale up a local LLM to be accessible worldwide like GPT, with memory and resource optimisation in mind? Thanks

  • @mosca204
    @mosca204 6 months ago

    So you inadvertently built a massive email warm-up. At least you will not be flagged as spam for a long time ahah.
    PS: It would be great to see a sales agent video soon ;)

  • @ryzikx
    @ryzikx 6 months ago

    I've always wanted to do this but I'm too dumb and lazy lmao; good to see someone like you is doing it.

  • @prestonmccauley43
    @prestonmccauley43 6 months ago

    What other services have you found for deployment that are cost-friendly? You have to install VMs, containers and more.

  • @Tanvir1337
    @Tanvir1337 6 months ago +2

    Mixtral 8x7b*

  • @matten_zero
    @matten_zero 6 months ago +1

    I've done that before @18:46. It works pretty well, especially when you combine it with SPR (popularized by David Shapiro).

  • @rchaumais
    @rchaumais 5 months ago

    Many thanks for your useful video.
    Have you evaluated NeMo from Nvidia?

  • @roke4025
    @roke4025 6 months ago

    🎉 Brilliant mate. I'm a fiend for compressing costs to the maximum, but I found out that during cost compression some models (e.g. Mistral Tiny) are not able to make proper custom tool calls and are unable to extract the JSON response result from the tool call. As soon as a switch is made to an OpenAI model fine-tuned to recognise JSON schemas, tool calls work perfectly (in Flowise). Is that why you persist in using OpenAI models in your calls, as opposed to using Mistral or Llama inference? So you can achieve the right tool calling?

  • @CoriolanBataille
    @CoriolanBataille 6 months ago

    Thank you so much for sharing your knowledge with us, it's extremely useful and inspiring (at least for me as a dev who is working on cashing in on AI). By the way, what do you think of MemGPT?

    • @AIJasonZ
      @AIJasonZ 6 months ago +1

      Thanks! MemGPT is a super interesting architecture. I haven't really run it in production though; do you know any applications built with MemGPT?

    • @CoriolanBataille
      @CoriolanBataille 6 months ago

      Yeah, I think there is a lot of potential. I'm not aware of any commercial application using it though, but I'm going to test it in some projects @@AIJasonZ

  • @subratnayak2682
    @subratnayak2682 6 months ago

    For the cascade method, how will you measure the score for each new question in production?
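
    One common answer is to let a cheap model grade its own draft and only escalate when the score falls below a threshold; a rough sketch of that pattern (the scoring prompt and threshold are arbitrary choices, not from the video):

      from openai import OpenAI

      client = OpenAI()

      def ask(model, prompt):
          return client.chat.completions.create(
              model=model, messages=[{"role": "user", "content": prompt}]
          ).choices[0].message.content

      def grade(question, answer):
          # Cheap self-check: score the draft from 0 to 10 with the small model.
          verdict = ask("gpt-3.5-turbo",
                        f"Question: {question}\nAnswer: {answer}\n"
                        "Rate the answer's correctness and completeness from 0 to 10. "
                        "Reply with a number only.")
          try:
              return float(verdict.strip().split()[0])
          except ValueError:
              return 0.0

      def cascade(question, threshold=7.0):
          draft = ask("gpt-3.5-turbo", question)   # cheap model first
          if grade(question, draft) >= threshold:
              return draft
          return ask("gpt-4", question)            # escalate only when needed

      print(cascade("Explain the cascade pattern for LLM cost reduction in two sentences."))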

  • @funny_tiger11
    @funny_tiger11 6 months ago

    Is Portkey AI an example of an open-source LLM router? (I have not used it, but it seems to offer the capability you described as a limitation of Neutrino AI.)

  • @prestonmccauley43
    @prestonmccauley43 6 months ago

    If you use the big ones like Azure, Bedrock, etc., they are so expensive to deploy because of the compute.

  • @headrobotics
    @headrobotics 6 months ago

    For fine-tuning a small model from a large one, what about OpenAI's terms of service? Have they changed to allow it?

  • @xonack
    @xonack 6 months ago +1

    EcoAssistant video please!

  • @ianalmeida4759
    @ianalmeida4759 6 months ago

    Reminds me of that scene in Silicon Valley where AI Dinesh speaks to AI Gilfoyle

  • @tks5182
    @tks5182 6 months ago

    Would appreciate a course or even a comment on what knowledge you need and what concepts you should know to be an AI & ML Engineer

  • @kguyrampage95
    @kguyrampage95 6 months ago +3

    At 8:05 you made an obvious mistake with the maths; you probably meant the cheapest model, not Mistral, since it would be 50x cheaper, not 214x cheaper.

    • @AIJasonZ
      @AIJasonZ 6 months ago +2

      Ahh, I highlighted the wrong row; it should be Mistral 7B. Thanks for spotting this, mate!

    • @kguyrampage95
      @kguyrampage95 5 months ago

      @@AIJasonZ Hey, this video was great by the way! I am learning to make videos to showcase some of my experiments, and I hope I can produce as much quality as you!

  • @user-ow8qd1sq7n
    @user-ow8qd1sq7n 5 months ago

    Prompt engineer and LLM developer here.
    GPT-4-32k is not the most powerful model; it is outclassed by gpt-4-1106-preview and now gpt-4-0125-preview, which is even better.
    Not only is GPT-4-32k worse, it is also 6 times more expensive! ($0.06/1K tokens for GPT-4-32k vs. only $0.01/1K tokens for gpt-4-0125-preview.)

  • @savire.ergheiz
    @savire.ergheiz 5 months ago +1

    Sorry to say this, but almost everything you mention here comes down to bad planning and rushing things out without thinking about the after-effects.
    It's not just in AI. It's always been like that, forever, if you try to follow hype.
    Unless you are backed by big companies or investors, planning way ahead for costs is always a must.

  • @450aday
    @450aday 5 months ago +1

    You really should not use AIs for multiplication; use a calculator. A find-tool AI is an important AI for saving money. A button AI is another good one.
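
    In OpenAI-style function calling, handing the maths to a calculator tool could look something like this; the tool name and schema are made up for illustration, and a real agent loop would also send the tool result back to the model:

      import json
      from openai import OpenAI

      client = OpenAI()

      tools = [{
          "type": "function",
          "function": {
              "name": "multiply",
              "description": "Multiply two numbers exactly.",
              "parameters": {
                  "type": "object",
                  "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
                  "required": ["a", "b"],
              },
          },
      }]

      resp = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": "What is 4821 * 973?"}],
          tools=tools,
      )

      call = resp.choices[0].message.tool_calls[0]
      args = json.loads(call.function.arguments)
      print(args["a"] * args["b"])  # the exact maths happens outside the model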

  • @andres.yodars
    @andres.yodars 6 months ago

    lovely

  • @salookie8000
    @salookie8000 5 months ago

    Genius

  • @WaxN-ey6vj
    @WaxN-ey6vj 6 months ago

    Since GPT development is rapid, I think building a fine-tuned model is risky because it is time-consuming.
    The cost won't be a big deal, as OpenAI constantly develops new models and reduces the cost of previous ones.

  • @mjkbird
    @mjkbird 6 months ago

    Isn't it against OpenAI's ToS to use the output as training data?

  • @nufh
    @nufh 6 months ago

    I managed to build the AI GF clone for free now with a local LLM.

  • @mohamedaminehamza
    @mohamedaminehamza 5 months ago

    It's a real-life Silicon Valley series scenario where two AIs start talking to each other lol.

  • @nikilragav
    @nikilragav 4 months ago

    14:56 - seems like this might not work well for needle-in-a-haystack approaches, right? Because if you want to ask "what departments were present at this session?", the bigger model does not have an answer to that in its context. You'd need some kind of vector similarity check first to assess whether the answer might even exist in the context given to the bigger model? And if not, give it the whole thing? Or at least do some RAG-style look-up and fetch? I'm not so sure how well RAG can do needle-in-a-haystack searching though. It seems highly dependent on your embedding model, and OpenAI doesn't have an option to use the GPT-4 embedding space, right?
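
    A rough sketch of the vector-similarity pre-check suggested here, using OpenAI embeddings; the similarity threshold is an arbitrary example and would need tuning for a real corpus:

      import numpy as np
      from openai import OpenAI

      client = OpenAI()

      def embed(texts):
          resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
          return [np.array(d.embedding) for d in resp.data]

      def answer_may_exist(question, context_chunks, threshold=0.35):
          # If no chunk is even loosely similar to the question, compression or
          # RAG will likely drop the needle -- fall back to sending the full context.
          vecs = embed([question] + context_chunks)
          q, chunks = vecs[0], vecs[1:]
          sims = [float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c)))
                  for c in chunks]
          return max(sims) >= threshold, max(sims)

      chunks = ["Attendees: finance, legal and platform engineering teams.",
                "The meeting mostly covered Q3 hiring targets."]
      print(answer_may_exist("Which departments were present at this session?", chunks))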

  • @vinitvsankhe
    @vinitvsankhe 6 months ago

    But what if I need an AI that needs to be trained with one data snapshot?

  • @jakobbourne6381
    @jakobbourne6381 5 months ago

    Stay ahead in the competitive market by leveraging the unique capabilities of *Phlanx's Caption Generator* , which not only saves you valuable time but also contributes directly to revenue growth through increased customer engagement.

  • @simonmassey8850
    @simonmassey8850 5 months ago

    Companies put in "fair usage" clauses to cap or throttle users. Ask your smart "sales agent" about that idea.

  • @noodjetpacker9502
    @noodjetpacker9502 5 months ago

    I don't know if this is a stupid question, but why doesn't ChatGPT already implement these features themselves? Or do they already do this?

  • @the_real_cookiez
    @the_real_cookiez 6 months ago

    How come you don't use state-of-the-art open-source LLM models? They should be strong enough, right?

    • @helix8847
      @helix8847 6 months ago

      The current issue with them is tool calling. Maybe Code Llama 70B could do it now.

  • @noahgottesla3439
    @noahgottesla3439 6 months ago

    This is the core technique behind the Rabbit R1.

  • @user-ti7fg7gh7t
    @user-ti7fg7gh7t 4 months ago

    It's not true that this is a 'new type of cost'. Traditional software companies have always needed to care about and look out for API costs. Anyone who has used GCloud or AWS has racked up unexpectedly high API costs one way or another. You can also set spending limits in your API settings on the OpenAI platform.