Build a Customer Support Bot | LangGraph

  • Published Jun 8, 2024
  • Build a Customer Support Chatbot | LangGraph
    In this tutorial, we create a travel assistant chatbot using LangGraph, demonstrating reusable techniques applicable to building any customer support chatbot or AI system that uses tools, supports many user journeys, or requires a high degree of control. #AI #LangGraph #llm
    We start by building a simple travel assistant and progressively add complexity to better support advanced capabilities:
    1. Zero-Shot Tool Executor: In the first part, we develop a simple agent with an LLM and tools, showing the limitations of this flat design for complex experiences.
    2. User Confirmation: In the second part, we add user confirmation before the agent takes any sensitive actions, giving the user more control but at the cost of a less autonomous experience.
    3. Conditional Interrupts: In the third part, we split tools into "safe" and "sensitive" categories, only requiring user confirmation on sensitive actions. This improves the user experience while maintaining an appropriate level of control.
    4. Specialized Workflows: In the fourth part, we separate user journeys into specific "skills" or "workflows". This allows optimizing prompts and tools for each intent, leading to a more reliable and tailored user experience.
    By the end of this tutorial, you'll understand key principles for designing customer support chatbots, balancing expressiveness and control to create delightful user experiences.
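The conditional-interrupt idea from part 3 can be sketched in plain Python. The tool names below are hypothetical and the routing is simplified; in the tutorial itself this is done by compiling the LangGraph graph with `interrupt_before` on the sensitive-tools node.

```python
# Sketch of part 3's "conditional interrupt" routing: safe tools run
# immediately, sensitive tools pause for user confirmation first.
# Tool names are hypothetical, for illustration only.

SAFE_TOOLS = {"search_flights", "lookup_policy"}
SENSITIVE_TOOLS = {"book_flight", "cancel_reservation"}

def route_tool_call(tool_name: str) -> str:
    """Decide whether a requested tool call may run or must pause."""
    if tool_name in SAFE_TOOLS:
        return "execute"             # run without interrupting the user
    if tool_name in SENSITIVE_TOOLS:
        return "await_confirmation"  # pause the graph for user approval
    return "reject"                  # unknown tool: refuse
```

Splitting the decision this way keeps the common path fully autonomous while still gating actions with side effects behind explicit user approval.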
    Chapters:
    00:00 Introduction
    01:15 Background: Chatbot Design Challenges
    02:38 Tutorial Roadmap: From Simple to Complex
    06:50 Set up Development Environment
    10:04 Part 1: Designing a Simple Zero-Shot Agent
    16:08 Part 2: Add User Confirmation
    19:37 Part 3: Conditional Interrupts
    25:10 Zero-shot Design Limitations and Solutions
    27:28 Part 4: Specialized Workflows (Intro)
    29:46 Workflow Design and Optimization
    38:44 Testing out + Review in LangSmith
    42:57 Reflecting on the Tutorial: From Simple Agent to Specialized Workflows
    46:50 Conclusion and Future Directions
    Additional Resources:
    - Tutorial Code: langchain-ai.github.io/langgr...
    - LangGraph Documentation: langchain-ai.github.io/langgr...
    / whinthorn
  • Science & Technology

COMMENTS • 41

  • @donb5521
@donb5521 1 month ago +4

    Very interesting non-trivial use case. Love the retrieval of user data and persisting of state. Use of mermaid to visually confirm the graph definition is extremely helpful.

  • @andrebadini3573
@andrebadini3573 1 month ago +6

    Thank you for providing such valuable and practical tutorials that offer real-world benefits for both users and businesses.

  • @zacboyles1396
@zacboyles1396 1 month ago +2

    This was a great demonstration. Thanks for putting it together, it was really thorough and well done.
    Was anyone else happy to see as little as possible about runnables? I could be wrong, but I think LCEL has been a massive detour that set LangChain way back. With this demo and a few others on LangGraph, I’ve started to get the feeling things are coming back together.

  • @alchemication
@alchemication 1 month ago +5

    Thanks for the video. Very useful thoughts to consider for scaling up. It would be interesting to see how we could add something like memory, so agents understand the bigger context of what the user has done in the past, to personalise the experience.

  • @andreamontefiori5727
@andreamontefiori5727 1 month ago +3

    Thank you, really useful, informative and interesting video. I spent the first 18 minutes sweating with battery level angst 😅

  • @chorltondragon
@chorltondragon 1 month ago +2

    Great video. In a project I've just completed I saw some of the benefits of a multi-agent design (simpler than this one). I also saw some of the limitations of LLMs if you attempt to put everything in a single prompt. This video presents a much more structured way of looking at the problem. Thank you :)

  • @diegocalderon3221
@diegocalderon3221 7 days ago

    I think you made a great point at 5:51 in that adding tools/skills or more agents or decisions can actually work against your goal. I think of this as “convergence” toward the user objective.

  • @mukilloganathan1442
@mukilloganathan1442 1 month ago +1

    Love seeing Will on the channel!

  • @kenchang3456
@kenchang3456 1 month ago +3

    Thank you very much. Hell of a video 🙂

  • @emiliakoleva3775
@emiliakoleva3775 1 month ago

    Great tutorial! I would like to see an example of task-oriented dialogue soon.

  • @Canna_Science_and_Technology
@Canna_Science_and_Technology 1 month ago +1

    I haven’t used any embedding models in Ollama yet. One of the reasons is the TTL. I did notice in the upgrade that we can set the time to live, keeping the model loaded for embeddings.

  • @byeebyte
@byeebyte 27 days ago

    🎯 Key Takeaways for quick navigation:
    00:44 *🚧 Improving the User Experience of Customer Support Chatbots*
    00:46 *💼 Enhanced Control over the User Experience*
    Made with HARPA AI

  • @emko6892
@emko6892 1 month ago

    Impressive 🎉 Can GroqCloud be used, given its faster responses, alongside an interactive UI?

  • @ANKURDIVEKAR
@ANKURDIVEKAR 1 month ago +2

    Thanks for an awesome tutorial. The GitHub link to the code is broken though.

  • @keenanfernandes1130
@keenanfernandes1130 1 month ago

    Is there a way to make LangGraph session-based? I have been able to do this with Agents using RunnableWithMessageHistory, but with the Supervisor and Agent pattern I couldn't figure out a way to implement session-based conversations/workflows.

  • @StoryWorld_Quiz
@StoryWorld_Quiz 1 month ago

    Do you have any advice on using other LLM models?

  • @XShollaj
@XShollaj 1 month ago +5

    Thank you for the excellent tutorials. Some constructive feedback, though: show more love to open-source models and integrate them more into your tutorials instead of just using OpenAI, Anthropic, or other closed-source models.
    Newer models like Llama 3 and Mixtral 8x22B are good enough to incorporate in your examples and videos (and as tools).

    • @willfu-hinthorn
@willfu-hinthorn 1 month ago +1

      :) working on it!

    • @_rd_kocaman
@_rd_kocaman 18 days ago

      Exactly. Llama 3 is good enough for 90% of use cases.

  • @lavamonkeymc
@lavamonkeymc 1 month ago

    Question: If I have a data preprocessing agent that has access to around 20 preprocessing tools, what is the best way to go about executing them on a pandas data frame? Do I have the data frame in the State and then pass that input in the function? Does the agent need to have access to that data frame or can we abstract that?

    • @willfu-hinthorn
@willfu-hinthorn 1 month ago

      Ya, I'd put the dataframe in the state in this case. The agent would probably benefit from seeing the table schema (columns) and maybe an example row or two so it knows what types of values lie within it.
      Re: tool organization: it's likely your agent will struggle a bit with 20 tools to choose from. I'd work on simplifying things as much as possible by reducing the number of choices the LLM has to make.
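The advice above — keep the full table in the graph state but show the model only a compact schema plus a sample row or two — might look like the following stdlib-only sketch. The state and field names are illustrative (not from the tutorial code), and the list-of-dicts stands in for a pandas DataFrame.

```python
from typing import Any, TypedDict

# Illustrative state: the table lives in state, but the LLM only ever sees
# the compact schema_summary string, not the raw data.
class AgentState(TypedDict):
    table: list[dict[str, Any]]   # stands in for a pandas DataFrame
    schema_summary: str           # what the agent actually gets to see

def summarize_table(rows: list[dict[str, Any]], n_examples: int = 2) -> str:
    """Build a short schema description: column names, types, sample rows."""
    if not rows:
        return "empty table"
    cols = {k: type(v).__name__ for k, v in rows[0].items()}
    header = ", ".join(f"{name}: {typ}" for name, typ in cols.items())
    examples = "; ".join(str(r) for r in rows[:n_examples])
    return f"columns ({header}); examples: {examples}"

rows = [{"city": "Zurich", "fare": 420.0}, {"city": "Basel", "fare": 310.5}]
state: AgentState = {"table": rows, "schema_summary": summarize_table(rows)}
```

Keeping the raw data out of the prompt both saves tokens and stops the model from trying to "read" the whole table instead of calling a tool.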

  • @ersaaatmeh9273
@ersaaatmeh9273 28 days ago

    When I use Llama 3 or Mistral, it doesn't recognize the tools. Has anyone else tried it?

  • @kunalsolanki5868
@kunalsolanki5868 1 month ago +7

    Did anyone try this with Llama 3?

    • @iukeay
@iukeay 1 month ago +3

      Yep. You will need to be careful with the context window, but there are some great workarounds for it.
      You also need to customize the system prompt a little for some of the workflows.

  • @orlandojosekuanbecerra522
@orlandojosekuanbecerra522 1 month ago

    Could you add reflection on LangGraph nodes?

  • @umaima629
@umaima629 1 month ago

    Is this code available on git? Please share the link.

  • @sakshamdutta6366
@sakshamdutta6366 20 days ago

    How can I deploy a LangGraph app?

  • @darwingli1772
@darwingli1772 1 month ago

    I tried the notebook and swapped in OpenAI instead of Claude, but it enters a continuous loop and doesn't output anything, just consumes tokens. Am I missing something?

    • @williamhinthorn1409
@williamhinthorn1409 1 month ago

      Hm I’ll run on other models - got a trace link you can share?

    • @Leboniko
@Leboniko 1 month ago +5

      They expect to get some kind of feedback/error to work with and are now asking for help. Your comment demoralizes progress and curiosity. It's a bully comment. Get off YouTube and go build something.

    • @Slimshady68356
@Slimshady68356 1 month ago

      @@choiswimmer man, Will is 100 times the engineer you will ever be; this code design is the best I've seen.

    • @willfu-hinthorn
@willfu-hinthorn 1 month ago +3

      Looks like some checks I added to handle some Claude API inconsistencies didn't play well with OAI - pushed up a fix to make it a bit more agnostic to the model provider.

  • @Ctenaphora
@Ctenaphora 1 month ago +11

    Please charge your computer.

    • @darkmatter9583
@darkmatter9583 12 days ago

      Always the same with the video or audio issues. The videos are great, but I would buy him a microphone on Amazon myself because he is really good and keeps the open-source community updated. Or raise a crowdfund to buy him a better microphone.

  • @gezaroth
@gezaroth 10 days ago

    Valuable content, but I'm having an error: when I run the first example conversation, it says I don't have a backup.sqlite file, and I can't get it. Is there any other URL? Even if I copy the first travel2.sqlite and rename it to travel2.backup.sqlite, it's not working :( 😢

  • @sharofazizmatov1000
@sharofazizmatov1000 29 days ago

    Hello. First of all, thank you for this video. I am trying to follow along, but when I run part_1 I get an error in checkpoints and I'm stuck there. Can you help me understand what is happening?
    File C:\Python311\Lib\site-packages\langgraph\channels\base.py:117, in create_checkpoint(checkpoint, channels)
    115 """Create a checkpoint for the given channels."""
    116 ts = datetime.now(timezone.utc).isoformat()
    --> 117 assert ts > checkpoint["ts"], "Timestamps must be monotonically increasing"
    118 values: dict[str, Any] = {}
    119 for k, v in channels.items():
    AssertionError: Timestamps must be monotonically increasing

    • @ersaaatmeh9273
@ersaaatmeh9273 28 days ago

      did you solve it?

    • @sharofazizmatov1000
@sharofazizmatov1000 28 days ago

      @@ersaaatmeh9273 No. I couldn't find a solution

    • @willfu-hinthorn
@willfu-hinthorn 22 days ago +1

      @@sharofazizmatov1000 I think we fixed this in the most recent release. TL;DR: Windows timestamp precision was insufficient for our checkpointer.
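      The failing assertion above requires each checkpoint timestamp to be strictly greater than the last, which a coarse wall clock (as on Windows) can violate. A generic workaround for that class of bug — not necessarily the exact fix LangGraph shipped — is to bump the timestamp whenever the clock hasn't ticked:

      ```python
      import threading
      from datetime import datetime, timezone

      # Generic sketch: guarantee strictly increasing checkpoint timestamps
      # even when the wall clock's resolution is too coarse to have advanced
      # between two calls. Not the actual LangGraph code.
      _lock = threading.Lock()
      _last_ts = ""

      def next_checkpoint_ts() -> str:
          """Return a timestamp string strictly greater than the previous one."""
          global _last_ts
          with _lock:
              ts = datetime.now(timezone.utc).isoformat()
              if ts <= _last_ts:
                  # Clock hasn't ticked: extend the previous string so it
                  # still sorts strictly after it lexicographically.
                  ts = _last_ts + "0"
              _last_ts = ts
              return ts
      ```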