HOW to Make Conversational Form with LangChain | LangChain TUTORIAL

  • Published 4 Dec 2024

COMMENTS • 75

  • @tomjerry5144
    @tomjerry5144 1 year ago +1

    This is what I had been looking for for a long time! Thank you for sharing.

  • @jayhu6075
    @jayhu6075 1 year ago +3

    What a great example of OpenAI functions; hopefully more examples of this stuff. Many thanks.

  • @shrvn110
    @shrvn110 1 year ago +1

    I hope you get everything you want in life! Thank you for your videos Sam!

  • @timttttast9793
    @timttttast9793 1 year ago +1

    I love it!!! A huge thanks for sharing your knowledge Sam. One thing that's been on my mind lately - suppose a user initially introduces himself with a nickname, like Sam, and later wants to rectify it by saying, 'Apologies, but my actual name is Samuel, not Sam.' Is there an efficient way to manage this within the system and keep the other answers that were already given? I'm genuinely curious to learn more about handling such situations.

  • @matthewmansour3295
    @matthewmansour3295 1 year ago

    Pure gold. Thanks for making this.

  • @lughinoo
    @lughinoo 1 year ago +3

    Great video. I would love to see an alternative version of the conversational form using open source models.

    • @trulymittal
      @trulymittal 1 year ago

      Did you find the alternative way using agents or something else, as Sam said in the video at 1:00?

  • @ghrasko
    @ghrasko 1 year ago +4

    Thanks, this was extremely useful! You emphasized that this is still a memory-less version, but because of this it is really limited, and I don't know yet how to build on it.
    For example, I need to collect a date from the user. In the prompt I can inject the current date, and thus the AI would be able to resolve input like "this Friday" or similar. However, as this is memory-less, the parser chain will not be aware of the prompt with the current date or any other contextual info about the date.
    I am new to LangChain, so any hint on how to proceed would be appreciated.
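
A minimal sketch of the date-injection idea from the comment above (my own illustration, not code from the video): once the current date is injected, a relative phrase can be resolved deterministically. The function name and the "this X means the next occurrence of X" interpretation are assumptions.

```python
# Hypothetical helper: resolve a relative weekday phrase against the
# injected current date, so both chains can share the same context.
from datetime import date, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def resolve_this_weekday(phrase: str, today: date) -> date:
    """Treat 'this X' as the next occurrence of weekday X (today counts)."""
    target = WEEKDAYS.index(phrase.split()[-1].lower())
    delta = (target - today.weekday()) % 7
    return today + timedelta(days=delta)

# If "today" is Wednesday 2024-12-04, "this Friday" is 2024-12-06.
print(resolve_this_weekday("this Friday", date(2024, 12, 4)))
```

The same injected date string can also be passed to the parser chain's prompt, so the parser sees the same context as the asking chain.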

  • @ChrisadaSookdhis
    @ChrisadaSookdhis 1 year ago +1

    This use case is similar to one I had been considering for a while. When companies put a contact form on the web, the prevailing wisdom is to keep the form as short as possible, lest you risk turning users away. But marketers always want more info, and we know SOME users are OK with sharing it.
    My idea is to have a conversational chatbot that tries to collect additional data fields after the contact form submission. The bot would collect only as much as users are happy to share, then stop and add the gathered info to the previously submitted form. If users do not want to, they don't have to share anything more. Win-win.

    • @samwitteveenai
      @samwitteveenai  1 year ago

      Certainly can do that, especially if you took out the ask_for part and just had some more generic prompts etc. One main point was that I wanted to show you probably don't want the filtering part on every utterance, just on the ones you think will be useful.

  • @thequantechshow2661
    @thequantechshow2661 1 year ago +3

    This is GOLD

  • @kennethleung4487
    @kennethleung4487 1 year ago

    Great work, Sam! Super useful

  • @joey424242
    @joey424242 4 months ago

    This is great!
    What would you do in a case where I want to make a sort of chatbot quiz that asks questions in a certain order? The questions will either be multiple choice or answered by free text.
    Meaning the LLM would either need to display the multiple-choice options and then record whether the user chose the right one, or the LLM would receive a free-text answer and then grade the user on how close it is to the actual textual answer.
    The app will need to keep score and then finally grade the user.
    Would you use agents or functions here?
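
A rough sketch of the grading side of such a quiz (my own, not from the video): exact matching for multiple choice and a fuzzy similarity score for free text. An LLM call could replace the `difflib` scorer for semantic grading; stdlib keeps the sketch self-contained.

```python
# Sketch of quiz grading (illustrative names): exact match for multiple
# choice, fuzzy string similarity for free-text answers.
from difflib import SequenceMatcher

def grade_multiple_choice(chosen: str, correct: str) -> int:
    # Case-insensitive exact match: 1 point or 0.
    return 1 if chosen.strip().lower() == correct.strip().lower() else 0

def grade_free_text(answer: str, reference: str, threshold: float = 0.6) -> int:
    # Similarity ratio in [0, 1]; award the point above a cutoff.
    ratio = SequenceMatcher(None, answer.lower(), reference.lower()).ratio()
    return 1 if ratio >= threshold else 0

score = grade_multiple_choice("B", "b") + grade_free_text(
    "the capital of France is Paris", "The capital of France is Paris.")
print(score)  # -> 2
```

The running total and final grade would live in ordinary application state outside the LLM, so neither chains nor agents need to remember the score.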

  • @Mrbotosty
    @Mrbotosty 1 year ago

    We can even have validations here: check if the email is valid and ask again if not! It's a great use case for LLMs.
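
A minimal sketch of that validation idea (my own, not from the video): check the extracted email before accepting it into the form, and re-ask if it fails. The regex is a deliberately loose illustration, not a full RFC 5322 check.

```python
# Sketch: validate the extracted email before accepting it; on failure
# the bot would re-ask for the field.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def accept_email(value: str) -> bool:
    return bool(EMAIL_RE.match(value))

print(accept_email("sam@example.com"))  # -> True
print(accept_email("not-an-email"))     # -> False
```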

  • @shreyasharma6074
    @shreyasharma6074 7 months ago

    This is amazing! Thank you so much!

  • @kilopist
    @kilopist 7 months ago

    Amazing tutorial, Sam! How could I give the user the option to ask clarifying questions? I guess memory in the ask_for chain?

  • @yasminesmida2585
    @yasminesmida2585 5 months ago

    Great video, thank you very much. What is the next step after creating the two chains? Should I create a global function to call them, develop an API, or consider another approach?

    • @samwitteveenai
      @samwitteveenai  5 months ago

      really depends on what you want to do with it

  • @davidw8668
    @davidw8668 1 year ago

    Thanks. This is pretty cool for all sorts of interactive content and lead generation but could also be imagined for personalised experiences.

    • @samwitteveenai
      @samwitteveenai  1 year ago +2

      Yes, totally. One use case that I have been working on where I used something like that was exactly lead gen.

  • @VeeDCreate
    @VeeDCreate 8 months ago +1

    Most of this is deprecated now. run vs invoke changes a lot of things. The function create_structured_output_runnable should be used instead of create_tagging_chain_pydantic. Plus, the Pydantic format is different too (dict issues).

    • @samwitteveenai
      @samwitteveenai  8 months ago +1

      Yes, this is probably a year old or so now.

    • @VeeDCreate
      @VeeDCreate 8 months ago

      @@samwitteveenai I wanted to thank you for your content. Didn't do that in the earlier comment. Thank you for all the work you have put into these easy to understand tutorials.

  • @datupload6253
    @datupload6253 1 year ago +3

    Hi, sorry my question is not video related, but what language model would you recommend for training from scratch on a 24GB GPU if I have my own dataset? I don't want to use a pre-trained model because I want to have my own tokenizer and the dataset is not in English. I've played around a bit with GPT-NeoX with models at sub-1B parameter sizes, but I'm thinking that's a pretty old project and maybe something more up to date (faster) has come out in recent months. Thanks

    • @samwitteveenai
      @samwitteveenai  1 year ago

      You probably don't want to train an LLM from scratch; you need a few hundred billion tokens to get it to take off, and most of the LLMs that are decent were pretrained on 1T+ tokens. You want to fine-tune a model that has been made with a multilingual tokenizer. A number of the open LLaMA clones do have 50k tokenizers that are more multilingual-friendly. A lot of it depends on what the language is.

  • @gautamsk502
    @gautamsk502 1 year ago +1

    Hi @sam, I am getting the below error when I execute the run command in the Colab you shared. Any idea what could be the reason?
    ValidationError Traceback (most recent call last)
    ----> 1 res = chain.run(test_string)
    ValidationError: 1 validation error for AIMessage
    content
      none is not an allowed value (type=type_error.none.not_allowed)

  • @svenandreas5947
    @svenandreas5947 1 year ago

    Just brilliant... I would like to ask how you would ensure that a user gives an indication of whether he is happy with the answer or not. I did play around with adding this question to the prompt template (including memory) but was not so successful. It works most of the time, but for whatever reason it is unable to deal with a simple yes/no answer. Looking forward to your tutorial. Eye opener.

  • @grandplazaunited
    @grandplazaunited 1 year ago

    Thanks Sam. This made my day :)

  • @FreestyleTraceur
    @FreestyleTraceur 1 year ago

    Very cool. Your videos are great.

  • @ChrisadaSookdhis
    @ChrisadaSookdhis 1 year ago +1

    I was totally surprised to see Chatree Kongsuwan in one of the examples. Do you know the musician?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      P'Ohm is actually an old friend, and I am in Bangkok this week. I also wanted to show that it would work with non-'western' names, so I put his name in there. Cool that someone noticed :D

  • @VijayDChauhaan
    @VijayDChauhaan 5 months ago +1

    Please provide an alternative solution for using this with open source models.

    • @samwitteveenai
      @samwitteveenai  5 months ago +1

      This kind of thing works with the Llama 3 models; you often just need to play with the prompts a bit.

    • @VijayDChauhaan
      @VijayDChauhaan 5 months ago

      @samwitteveenai Should I use Ollama?

  • @vinsi90184
    @vinsi90184 1 year ago

    As always, starting with thanks. I am always catching up with your videos.
    I am curious about the use of field descriptions in the Pydantic class. What purpose do they serve? Are they picked up by the LLM as well to understand what this means? Also curious about how to use few-shot learning with the tagging chains you have created.

    • @samwitteveenai
      @samwitteveenai  1 year ago +2

      Yes, exactly: the descriptions help the LLM work out what to do.

  • @pmshadow
    @pmshadow 1 year ago

    Thanks a lot! Fantastic content!!

  • @monirehebrahimi6141
    @monirehebrahimi6141 7 months ago

    Is there any chance to connect it with Llama 3 instead of OpenAI?

  • @huislaw
    @huislaw 1 year ago +1

    Nice, how can I use this properly as a tool for an agent?
    I'm trying to create a tool for users who want to get in touch, which will collect a name and email from the user.
    When I tried to use it in an agent, it would trigger the tool if the user said they want to get in touch,
    but when the agent asks for the name and the user replies with their name, the tool no longer gets triggered. The agent simply says, "hi, {name}, nice to meet you."

  • @carrillosanchezricardo2594
    @carrillosanchezricardo2594 7 months ago

    Is there a way to start this "flow" from a previous conversation? For example, the user says "Yes, I would like to book a flight", and then this flow starts asking. I think this could be possible by "wrapping" this in an agent or something, am I wrong?

  • @RedCloudServices
    @RedCloudServices 1 year ago

    Sam, what if the use case has picklists which are dependent? For example, if the form has categories of fruits and vegetables, and subcategory enumerators based on the value of the category?
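
One way to handle dependent picklists (my sketch, not from the video) is to validate the extracted pair after the chain runs, re-asking when the subcategory does not belong to the chosen category. The category map here is invented for illustration.

```python
# Sketch: category -> allowed subcategories (values invented here).
PICKLISTS = {
    "fruits": {"apple", "banana", "mango"},
    "vegetables": {"carrot", "spinach", "onion"},
}

def valid_choice(category: str, subcategory: str) -> bool:
    # Unknown categories yield an empty set, so they never validate.
    return subcategory in PICKLISTS.get(category, set())

print(valid_choice("fruits", "mango"))   # -> True
print(valid_choice("fruits", "carrot"))  # -> False
```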

  • @ZapCrafter
    @ZapCrafter 1 year ago

    I kind of have this working as a feedback form, but it's a bit clunky and every question starts with "I need to gather some feedback...", so it repeats the "explain you need to get some info" part on every loop. I also had to include "not mentioned" in the 'empty' conditions. I can't help but think it needs memory to contextualise what it has already asked, but this might be expensive with regard to token usage. Maybe you could add a "yes" or a "no" as to whether the question is answered and then have the parser review the memory to pick out the answers from the conversation history? I've not had any luck with that yet though.
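
The "ask only for what's missing" loop described above can be kept cheap without memory by tracking filled fields outside the LLM; the repeated preamble then only needs to be generated once. This is my own sketch with example field names, not the video's exact code.

```python
# Sketch: track the form state outside the LLM and ask only for the
# fields still empty. Field names are examples, not the video's schema.
def missing_fields(record: dict) -> list:
    empty = ("", None, "not mentioned")
    return [k for k, v in record.items() if v in empty]

record = {"name": "Sam", "email": None, "city": "not mentioned"}
print(missing_fields(record))  # -> ['email', 'city']
```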

  • @vinsi90184
    @vinsi90184 1 year ago

    Hey Sam, can you also explain the reason for choosing the tagging chain instead of the extraction chain? I was trying this out with extraction and it gives back a list. The plus there is that I can also collect information about a group of users rather than one, but it also creates extra errors. Say the names of the cars you own: there may be one or more. So when I used the extraction chain and said I live in Melbourne, Australia and I own a Volkswagen and a Tesla, the extraction created two entries: one with my name etc. and one car, and the other with blank entries elsewhere and the other car. Whereas if I use the tagging chain, both cars are joined with "and" in one field.
    Happy to hear your thoughts on tagging vs extraction chains and their respective pros and cons.

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      Tagging they seem to have made more for classification, e.g. sentiment analysis etc. It sounds like what you are doing is more extraction than classification, so it makes sense to use that one.

  • @kenchang3456
    @kenchang3456 1 year ago

    Hi Sam, thank you for another great example to learn from :-) When you decided to use Pydantic, was it that you had experience with it and it fits this use case?

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      I have used Pydantic before with FastAPI etc., but even OpenAI apparently is using it for this, so it makes sense to use, as it works really well.

  • @micbab-vg2mu
    @micbab-vg2mu 1 year ago

    Great video - thank you:)

  • @rashedulkabir6227
    @rashedulkabir6227 1 year ago

    Make a video about the new SDXL and how to run it on Google Colab.

  • @konradhylton
    @konradhylton 1 year ago

    Thanks for sharing. Hi Sam, do you know if LangChain's JavaScript components are also able to do this?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      They should be able to; all of this is just a different style of prompting. The part they may struggle with is the Pydantic class; my guess is they would use a similar generic object.

  • @caiyu538
    @caiyu538 10 months ago

    Great

  • @MeanGeneHacks
    @MeanGeneHacks 1 year ago

    Is it possible to include Optional fields in the pydantic class and have the model include them if provided by the user, but not ask for them specifically? Every time I add an Optional field, it seems to break the chain.

    • @samwitteveenai
      @samwitteveenai  1 year ago

      Yes, totally; all the fields I had in there were optional. The only reason it asked was because I had a separate function for that. The Pydantic class had nothing as required.

  • @MeanGeneHacks
    @MeanGeneHacks 1 year ago

    I often end up with Pydantic validation errors when I add other fields, such as "issue description" for a customer service bot. Any idea why that's happening?
    ---
    I understand there was a problem with your order. In order to assist you better, could you please provide me with a description of the issue you encountered?
    ---> my shipment didn't arrive.
    ---
    pydantic.error_wrappers.ValidationError: 4 validation errors for OrderDetails
    issue_description
    field required (type=value_error.missing)
    email
    field required (type=value_error.missing)
    phone
    field required (type=value_error.missing)
    order_number
    field required (type=value_error.missing)

    • @samwitteveenai
      @samwitteveenai  1 year ago

      Without playing with it a bit, my first guess would be that your description of what the field should be is not detailed enough for the OpenAI Functions. Also try it with GPT-4 now that it is available.
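
A likely fix for the "field required" errors above (my reading of the thread, not Sam's exact code) is to give every field a default so nothing is required at parse time. Shown here with a stdlib dataclass to stay self-contained; the same shape applies to the Pydantic model by declaring each field as `Optional[str] = None`.

```python
# Illustration of the all-optional-fields fix, using a stdlib dataclass
# so the snippet runs without Pydantic installed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrderDetails:
    issue_description: Optional[str] = None
    email: Optional[str] = None
    phone: Optional[str] = None
    order_number: Optional[str] = None

# Constructing with only what the user has provided raises no
# "field required" error; the rest stay None until collected.
partial = OrderDetails(issue_description="my shipment didn't arrive")
print(partial.email)  # -> None
```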

  • @yasminesmida-qc9ce
    @yasminesmida-qc9ce 5 months ago

    Can I use any other open source LLM model? Which one do you recommend?

    • @samwitteveenai
      @samwitteveenai  5 months ago +1

      I would go with Llama 3 now for an open source version, or Mistral.

    • @yasminesmida2585
      @yasminesmida2585 5 months ago

      @samwitteveenai Thank you very much. What is the next step after creating the two chains? Should I create a global function to call them, develop an API, or consider another approach?

    • @yasminesmida2585
      @yasminesmida2585 5 months ago

      @samwitteveenai Are Llama and Mistral better than GPT-3.5 for this task?

    • @VijayDChauhaan
      @VijayDChauhaan 5 months ago

      @yasminesmida2585 Are you able to recreate this with Llama 3? If so, how? Did you use an API, Ollama, or something else?

  • @yasminesmida2585
    @yasminesmida2585 5 months ago

    Please, why did you use model='gpt-3.5-turbo-0613' for chain 1 and model='gpt-3.5-turbo', the default model, for chain 2?

    • @samwitteveenai
      @samwitteveenai  5 months ago

      They would have been the same (at the time of recording); one is a pinned version and one is not.

  • @ahmadzaimhilmi
    @ahmadzaimhilmi 1 year ago

    I was thinking of a more complex conversation: you start a main conversation, then have a side discussion about a subtopic, come to a conclusion, and use the conclusion to chart the next step. Something like that.

    • @samwitteveenai
      @samwitteveenai  1 year ago +1

      Yes, this will work; just plan out the possible paths. I will make a video on routers soon, and they can be used for that.

    • @JermaineCheah
      @JermaineCheah 1 year ago

      @samwitteveenai Thanks for going into these depths, as many other content creators always stop at the "chat with your docs" bot, or conversation memory and whatnot. What I am trying to achieve is very similar to what Ahmad Zaim mentioned as well.
      Looking to piece all the puzzles together and create a much better conversational chatbot for inbound customer service.
      Will you still be doing intent at the routing level, to handle cases where a user at, say, a refund juncture, filling in the conversational form, provides a message that is not what the refund juncture is expecting, and how would you handle and route those?
      Really excited to see what you have in the pipeline. Keep up the great work. Do you have a Patreon?
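
A toy illustration of the routing idea discussed in this thread (my sketch; LangChain's actual router chains use an LLM to pick the destination): choose a handler based on the utterance, falling back to the main conversation. Names like `refund_chain` are placeholders.

```python
# Toy keyword router (placeholder chain names): pick a handler for the
# utterance, falling back to the main conversation chain.
ROUTES = {
    "refund": "refund_chain",
    "book": "booking_chain",
}

def route(utterance: str) -> str:
    text = utterance.lower()
    for keyword, chain in ROUTES.items():
        if keyword in text:
            return chain
    return "main_chain"

print(route("I want a refund for my order"))  # -> refund_chain
print(route("hello there"))                   # -> main_chain
```

In the off-topic-message case raised above, the router would run on every turn, so an utterance that no longer matches the refund path falls back to the main chain instead of being forced through the form.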

  • @guanjwcn
    @guanjwcn 1 year ago

    Is it really true that you mostly live in Singapore?

    • @samwitteveenai
      @samwitteveenai  1 year ago

      That is certainly true, though I recorded this in Bangkok and am in BKK this week.

  • @Weberbros1
    @Weberbros1 1 year ago

    Hey man, FYI this video seems to be blacklisted from showing up in YouTube search results. Not sure what you did to piss off the algorithm lol, but YouTube hates this video for some reason.

    • @Weberbros1
      @Weberbros1 1 year ago

      Never mind, it shows up in search results again. No problem anymore.

  • @rossanobr
    @rossanobr 1 year ago

    JS videos please 🥹🥹

  • @蔡瀚緯-w4j
    @蔡瀚緯-w4j 1 year ago

    Hi Sam, thank you for your amazing videos that have helped me a lot. I've learned a great deal about LangChain recently.
    I'm currently working on developing a tool that can process 30 to 50 hotel reviews at once. The goal is to classify the priority level of each review based on predefined rules, allowing hotel staff to quickly respond to complaint reviews.
    The rules may look like this:
    high_priority_standard = ["Unwilling to visit again", "Customer injured due to hotel", "Serious hygiene issues", "..."]
    medium_priority_standard = ["Price perception gap", "Unsatisfactory staff service", "..."]
    low_priority_standard = ["Issues that cannot be improved in the short term (location disadvantage, outdated hotel)", "Internet connection problems", "..."]
    My question is, which LangChain tool should I use if I want to automate and reliably process customer reviews each day? I tried using csvAgent, inputting 30 reviews, but it only gave me 4 outputs, and the quality of the outputs did not meet my expectations.
    I would appreciate it if you could provide me with some advice. Thank you!

    • @蔡瀚緯-w4j
      @蔡瀚緯-w4j 1 year ago

      I solved the problem with a template and by using multiple input_variables. Thanks.
      But the output is still unstable; the LLM will miss 3 to 5 reviews.

    • @蔡瀚緯-w4j
      @蔡瀚緯-w4j 1 year ago

      I have watched your excellent video about parsers on your channel, which helped me understand the functionality of Pydantic. I am wondering if I can use Pydantic to define the desired number of outputs from the LLM. Currently, I have only seen Pydantic being used to enforce the format of each output.
      For example, initially I relied on len(data) to specify that the LLM output should match the number of input reviews. However, it doesn't always work well, as sometimes the LLM still outputs fewer results. Below is my original code, and I would appreciate your suggestions.
      --------------------------------------
      # The priority classification rules, in Chinese
      priority_standards = [
          {"priority": "high_priority", "standards": ["不願意再光顧", "顧客因為飯店受傷", "重大衛生問題", "顧客個人財物丟失或被竊", "顧客隱私洩露", "評論內容顯示顧客情緒極度憤怒"]},
          {"priority": "medium_priority", "standards": ["可以短期改善的問題(服務流程,動線問題....)", "價格認知落差", "員工服務不滿意", "飯店設施故障", "房間清潔問題", "評論內容顯示顧客情緒不太愉快"]},
          {"priority": "low_priority", "standards": ["無法短期改善的問題(地點不優、飯店老舊....)", "網絡連接問題", "飯店噪音問題", "客觀來講並非飯店業者問題"]},
      ]

      llm_16k = ChatOpenAI(model_name='gpt-3.5-turbo-16k', temperature=0)

      # Define the prompt template
      prioriy_classify_prompt3 = PromptTemplate(
          input_variables=["text_input", "priority_standard", "review_count"],
          template="""
      To ensure that all the {review_count} input reviews are processed according to the provided rules, \
      please use the following instructions for each review:
      Extract the following information:
      priority: As an expert in public relations and crisis management for a five-star hotel, \
      please leverage your extensive experience to classify the priority of each review \
      based on the provided {priority_standard}. \
      The priority classification should yield one of three outcomes: high_priority, medium_priority, or low_priority.
      priority_reason: You will explain the reasons behind the priority classification. \
      Please provide a concise description, in Traditional Chinese (Taiwan), of your rationale, customer sentiments, severity of the situation, and other relevant factors. \
      Limit the explanation to 30 words to ensure brevity.
      key_fact: Please summarize the key facts of each review without including your own opinion; \
      keep the key facts in short sentences and in Traditional Chinese (Taiwan) whenever possible.
      Make sure the number of final outputs is equal to {review_count}, and format as JSON with the following keys:
      key_fact
      priority
      priority_reason
      reviews: '''{text_input}'''
      """)

      prioriy_classify_chain3 = LLMChain(llm=llm_16k, prompt=prioriy_classify_prompt3)
      prioritys3 = prioriy_classify_chain3.predict_and_parse(text_input=data, priority_standard=priority_standards, review_count=len(data))
      print(prioritys3)
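
One workaround for the dropped reviews above (my suggestion, not from the thread): send the reviews in small fixed-size batches and retry any batch whose parsed output count does not match its input count, rather than one 30-review call.

```python
# Sketch: split reviews into fixed-size batches; a caller could compare
# each batch's parsed output count against the batch size and retry.
def batched(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

reviews = [f"review {i}" for i in range(30)]
print(len(batched(reviews, 5)))  # -> 6
```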