LLM Chat App in Python w/ Ollama-py and Streamlit

  • Published May 17, 2024
  • In this video I walk through the new Ollama Python library and use it to build a chat app with a UI powered by Streamlit. After reviewing some important methods from this library, I touch on Python generators as we construct our chat app, step by step. A condensed sketch of the finished app appears after the timestamps below.
    Check out my other Ollama videos - • Get Started with Ollama
    Links:
    Code from video - decoder.sh/videos/llm-chat-ap...
    Ollama-py - github.com/ollama/ollama-python
    Streamlit - streamlit.io/
    My website - decoder.sh
    Timestamps:
    00:00 - Intro
    00:26 - Why not use the CLI?
    01:17 - Looking at the ollama-py library
    02:26 - Setting up Python environment
    04:05 - Reviewing Ollama functions
    04:14 - list()
    04:52 - show()
    05:44 - chat()
    06:55 - Looking at Streamlit
    07:59 - Start writing our app
    08:51 - App: user input
    11:16 - App: message history
    13:09 - App: adding ollama response
    15:00 - App: choosing a model
    17:07 - Introducing generators
    18:52 - App: streaming responses
    21:22 - App: review
    22:10 - Where to find the code
    22:27 - Thank you for 2k
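
    For reference, here is a condensed sketch of the kind of app built in the video (a rough approximation that assumes recent versions of streamlit and ollama-py with dict-style responses; the code linked above is the authoritative version):

      import ollama
      import streamlit as st

      st.title("Ollama Chat")

      # Keep the conversation in session state so it survives Streamlit reruns
      if "messages" not in st.session_state:
          st.session_state.messages = []

      # Let the user pick any locally downloaded model
      models = [m["name"] for m in ollama.list()["models"]]
      model = st.selectbox("Choose a model", models)

      # Replay the conversation so far
      for msg in st.session_state.messages:
          with st.chat_message(msg["role"]):
              st.markdown(msg["content"])

      # Walrus assignment: the block only runs once the user submits a prompt
      if prompt := st.chat_input("Say something"):
          st.session_state.messages.append({"role": "user", "content": prompt})
          with st.chat_message("user"):
              st.markdown(prompt)

          # Generator that yields tokens as ollama streams them back
          def token_stream():
              stream = ollama.chat(model=model, messages=st.session_state.messages, stream=True)
              for chunk in stream:
                  yield chunk["message"]["content"]

          with st.chat_message("assistant"):
              reply = st.write_stream(token_stream())
          st.session_state.messages.append({"role": "assistant", "content": reply})

    Run it with streamlit run app.py while the ollama server is running locally.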
  • Science & Technology

COMMENTS • 71

  • @bhagavanprasad
    @bhagavanprasad 2 days ago

    Thank you for sharing the knowledge

  • @diane-sun
    @diane-sun 2 months ago +4

    Great stuff as always! I really appreciate you breaking down the code line-by-line. Very clear explanation.

    • @decoder-sh
      @decoder-sh  2 months ago

      Thanks for tuning in as always!

  • @rijeanirso
    @rijeanirso 11 days ago

    Great Video! Precise and really easy to follow

    • @decoder-sh
      @decoder-sh  6 days ago

      I appreciate it, thanks for watching!

  • @Aberger789
    @Aberger789 21 days ago

    Keep the videos coming, your editing and teaching style are top notch!

  • @MrEdinaldolaroque
    @MrEdinaldolaroque 2 months ago +1

    Straight to the point! Thank you for sharing.

  • @akashrawat217
    @akashrawat217 2 months ago

    High quality stuff. Clear and concise. Keep making more such videos.

  • @spartacusnobu3191
    @spartacusnobu3191 1 month ago

    Really clear and concise videos that actually show how to do things, instead of the tons of videos that only read and attempt to explain research papers. Keep up the good work, this will get you really far as people gradually discover your content.

    • @decoder-sh
      @decoder-sh  1 month ago

      Thank you for watching! I do hope to expand the topics covered in my videos and will probably do some model / whitepaper reviews, though I definitely enjoy these practical examples the most.

  • @childisch315
    @childisch315 2 months ago

    This really helped me at my internship! Thanks a lot and keep the videos coming! 😁

    • @decoder-sh
      @decoder-sh  2 months ago

      Glad to hear it - good luck at your internship!

  • @replymeasapp
    @replymeasapp 2 months ago +2

    We want more videos from you, please keep up the pace

    • @decoder-sh
      @decoder-sh  2 months ago

      I appreciate that, I want more videos from me too! I'm currently traveling but hope to post a couple that I've been working on when I return 🏃‍♂️💨

  • @gogasahab
    @gogasahab 2 months ago

    wonderful explanation! thanks

  • @Ghrenig
    @Ghrenig 2 months ago

    Excellent - very clear and concise.

  • @natekidwell
    @natekidwell 1 month ago

    Insanely good playlist and very well presented.

  • @AliAlias
    @AliAlias 2 months ago

    Thanks ❤
    Easy and helpful 😊

  • @theubiquitousanomaly5112
    @theubiquitousanomaly5112 2 months ago

    You should consider making more and more tutorials. You are the best!!

    • @decoder-sh
      @decoder-sh  2 months ago

      I appreciate it! More on the way :)

  • @yaa3g
    @yaa3g 2 months ago

    Damn good stuff, sir.

  • @mbrihoum
    @mbrihoum 2 months ago

    Very good job, I can't wait to see RAG applications. You are an awesome teacher.

    • @decoder-sh
      @decoder-sh  2 months ago

      Thank you for your support!

  • @PenicheJose1
    @PenicheJose1 1 month ago +1

    I'm glad I found your videos. I was wondering if you're ever going to do a Udemy class. Also, what text editor are you using? It's really clean. Thank you for taking the time to make these videos.

    • @decoder-sh
      @decoder-sh  1 month ago

      I'm glad you found my videos as well! I haven't looked into doing a Udemy course, but that's an interesting idea. I'm using VSCode as my IDE, but I have most of the UI elements disabled for a more simplified view when I'm filming. VSCode also has a "Zen Mode" which is a similar feel.

    • @PenicheJose1
      @PenicheJose1 1 month ago

      @@decoder-sh I also use VS Code, on Windows; yours looks really clean and I really like that it's not distracting, so I will try to make mine look like that.
      Yeah, I think Udemy could be a good platform for you, and I haven't found many classes on this subject. I found some with teachers from India, which is fine but a little hard to follow, and I really enjoy the way you explain things. I was having some problems grasping the concept of how to use all the different models... once again, thank you. And maybe you could create a video on the different parts of LLMs, like how to use Ollama within agent platforms...

  • @anurajms
    @anurajms 22 days ago

    Thank you. This is very informative. Could you post videos utilizing Chroma DB's persistent state to work with PDF documents and SQL databases?

  • @tommymalm
    @tommymalm 2 months ago

    Great video! A great next video would be to insert Ollama functions in there, so a question about the weather, for example, would return something like the good ol' get_weather("San Francisco") example; you call some external API to get the result and then return it to the user.

    • @decoder-sh
      @decoder-sh  2 months ago

      Great idea! Ollama itself does not support function calling, but I would love to cover using a model that is specifically tuned for generating function call outputs. I'll add this to my list, thanks for watching :)

    • @tommymalm
      @tommymalm 2 months ago

      @@decoder-sh With LangChain you can get function calling with Ollama.
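
      For readers curious what a prompt-based approach can look like without LangChain, here is a rough sketch (not from the video; the prompt format, model name, and get_weather helper are illustrative placeholders):

        import json
        import ollama

        def get_weather(city: str) -> str:
            # Stand-in for a call to a real weather API
            return f"It is sunny in {city}."

        system = (
            "When the user asks about the weather, reply ONLY with JSON like "
            '{"function": "get_weather", "args": {"city": "..."}}'
        )

        response = ollama.chat(
            model="mistral",  # any locally pulled model
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": "What's the weather in San Francisco?"},
            ],
        )

        # Parse the model's reply and dispatch to the matching local function
        call = json.loads(response["message"]["content"])
        if call.get("function") == "get_weather":
            print(get_weather(**call["args"]))

      A model tuned for structured output makes this far more reliable; json.loads will raise if the model strays from the requested format.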

  • @CodeShockDev
    @CodeShockDev 11 days ago +1

    Great videos, how about a llama3, streamlit, groq video?

    • @decoder-sh
      @decoder-sh  9 days ago

      Great idea! I would love to do a video using groq

  • @bjaburg
    @bjaburg 2 months ago

    Fantastic stuff. I am just starting up my company and (it seems that) new clients are queueing up. Your videos are absolutely spot on, so thanks again. If you need more ideas for content: I was wondering if perhaps you could create one about fine-tuning or training a downloaded model (perhaps Phi)? I know OpenAI has this sleek interface for uploading 'ideal' question-answer pairs and getting a trained model as a result. This surely should be possible using your own model, right?
    And while we are at it: how would you deploy your own model to a production server?
    Take care and keep up the good work!

    • @decoder-sh
      @decoder-sh  2 months ago

      Thanks for the comment! Fine tuning is on my short list to cover, I think that should be a fun one. What’s your business, if I may ask?

  • @guanjwcn
    @guanjwcn 2 months ago

    Great video. Could you do a video using langchain, RAG, and streamlit? This would be very helpful.

    • @decoder-sh
      @decoder-sh  2 months ago

      Yes one of my next videos will be simple RAG without langchain, then I'm also working on a whole series just about langchain

  • @khalidkifayat
    @khalidkifayat 2 months ago

    Nice tutorial. Kindly correct me if I am wrong: the models shown in the video were already downloaded onto your system?
    If yes, then for delivering a project to a client, how would we deploy our model to a production server?

    • @decoder-sh
      @decoder-sh  2 months ago

      Yes that is correct, the models shown were already downloaded to my system. If you want to add a new model, you can use the ollama-py library and do ollama.pull('someModelName'). I also have a video showing how to use models from outside of ollama here ua-cam.com/video/fnvZJU5Fj3Q/v-deo.html - Good luck!
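
      For example, that pull-then-list flow in ollama-py might look like the following (a minimal sketch; "someModelName" is a placeholder, and the dict-style responses match the library version shown in the video):

        import ollama

        ollama.pull("someModelName")       # download the model if it isn't on disk yet
        for m in ollama.list()["models"]:  # list everything installed locally
            print(m["name"])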

    • @khalidkifayat
      @khalidkifayat 2 months ago

      @@decoder-sh For delivering a project to a client, how would we deploy this model to a production server?

  • @ZaferCan
    @ZaferCan 1 month ago +1

    Could a feature that voices the answers produced by the assistant be added? I tried it, but the streamlit connection closes itself without any errors after 2 or 3 messages.

    • @decoder-sh
      @decoder-sh  1 month ago +2

      Yes absolutely! Modern browsers include a SpeechSynthesis API, so you could use this to speak the LLM responses without much effort. You might need to implement a custom Streamlit component to call the JS though.
      developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis
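
      As a sketch of that idea (my own assumption of how it could be wired up, not code from the video), the reply text could be handed to the browser through a small HTML component:

        import json
        import streamlit.components.v1 as components

        def speak(text: str) -> None:
            payload = json.dumps(text)  # escape the text safely for JavaScript
            components.html(
                f"""
                <script>
                  const utterance = new SpeechSynthesisUtterance({payload});
                  window.speechSynthesis.speak(utterance);
                </script>
                """,
                height=0,
            )

        speak("Hello from the assistant!")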

  • @sushicommander
    @sushicommander 2 months ago

    Great video. I would love to see how you would tackle having the output of one model fed to another model, but in a chat environment. For example, qwen1.5 receives input from the user in Chinese and translates it into English, sends it to OpenHermes Mistral 7B as input, and then OpenHermes responds to the user. Or, for example, LLaVA receives a picture from the user and a question based on that picture. LLaVA recognizes the image, sends its output and the user's question about the picture to OpenHermes Mistral 7B, which then responds to the user. The frontend could be simple React code or Streamlit... Not sure if this can be considered agents, but anyway, that would be an awesome video and kind of an extension to this one.

    • @decoder-sh
      @decoder-sh  2 months ago +1

      I will be covering this when I start getting into langchain / llamaindex! They are frameworks specifically designed for this kind of "chaining" and routing between different models.

  • @user-pc6eh1ut5s
    @user-pc6eh1ut5s 2 months ago

    Great work. Please make a video on a RAG app using Ollama, Ollama embeddings ('nomic-embed-text'), and Chroma or Qdrant DB with Gradio. Thanks!

    • @decoder-sh
      @decoder-sh  2 months ago

      I plan on making one video that builds RAG with ollama "from scratch", then a series of videos that use Langchain for RAG - stay tuned!

  • @mohl-bodell2948
    @mohl-bodell2948 2 months ago

    Using the ollama API is good, but perhaps you could show a bit about langchain, using ollama as an example? Langchain is as close as it gets to an industry standard for accessing all sorts of models, so showing how to use it would be valuable to your growing community.

    • @decoder-sh
      @decoder-sh  2 months ago

      This is a great idea! I'm currently writing a multi-part series on RAG with langchain. I want to release a couple other videos before that, but stay tuned :)

  • @txbluesguy
    @txbluesguy 2 months ago

    I am just getting into Ollama and Python coding. I set up Ollama and Ollama Web UI on my Docker. How can I tell a Streamlit app to use the Ollama installed in my Docker when the application I am working on is running on a different computer (in the same network)? Thank you in advance.

    • @decoder-sh
      @decoder-sh  2 months ago +1

      Welcome to Python and ollama! Your specific question has more to do with networking than either ollama or Python. Assuming this is just a personal project, the easiest thing for you to do would be to open an SSH connection from computerA to computerB (A runs Streamlit, B runs ollama/Docker). You can also use the ssh command to forward specific ports, which you would use to connect to whatever port you've exposed ollama through via Docker. This is how I connect to my PC with a big GPU in it from my other computer.
      I also have a video on another way of connecting to ollama remotely:
      ua-cam.com/video/syR0fT0rkgY/v-deo.html
      Let me know how it goes!
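
      As a sketch of the client side (assuming the port has already been forwarded, e.g. ssh -L 11434:localhost:11434 user@computerB, and that a model such as llama2 is pulled on computerB):

        import ollama

        # Point ollama-py at the forwarded port instead of the default local server
        client = ollama.Client(host="http://localhost:11434")
        response = client.chat(
            model="llama2",
            messages=[{"role": "user", "content": "Hello from computer A!"}],
        )
        print(response["message"]["content"])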

    • @txbluesguy
      @txbluesguy 2 months ago

      @@decoder-sh Thank you for the reply. I appreciate it. I will watch the video. It is so much easier to learn new programming concepts than it was when I was in high school and college (I used punch cards back in those days LOL).

    • @decoder-sh
      @decoder-sh  2 months ago

      @@txbluesguy I believe it! Access is no longer an issue, and instead the problem is focus - with so much stuff to learn, what do you choose to spend your time on?
      Also, you and @fedorp4713 should chat about the old days

  • @user-tk5ir1hg7l
    @user-tk5ir1hg7l 2 months ago

    Is there a way to set system prompts to provide context in the Ollama Python API?

    • @decoder-sh
      @decoder-sh  2 months ago +1

      Yes there is! I'll make sure to show that in another video. But right now, you can add a system prompt using the chat method by just adding another message in the array with the "system" role. It should look like ollama.chat(messages=[{'role': 'system', 'content': 'you are a helpful assistant...'}, ])
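
      Spelled out as a runnable snippet (a minimal sketch; the model name and prompt text are illustrative):

        import ollama

        response = ollama.chat(
            model="mistral",
            messages=[
                {"role": "system", "content": "You are a helpful assistant that answers concisely."},
                {"role": "user", "content": "What is a Python generator?"},
            ],
        )
        print(response["message"]["content"])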

    • @user-tk5ir1hg7l
      @user-tk5ir1hg7l 2 months ago

      Awesome, thanks. Some more topics to consider: (1) a near-real-time, realistic-sounding TTS system for open-source LLMs which can be run locally on a single GPU; (2) optimizing LLM inference speeds for NVIDIA RTX GPUs, maybe comparing ollama and LM Studio, or rolling your own if possible.

  • @yassinmohammedhadjadjaoul9064
    @yassinmohammedhadjadjaoul9064 29 days ago

    Is there a model that can receive a TXT or PDF file and then process it based on your request?

    • @decoder-sh
      @decoder-sh  29 days ago

      To my knowledge, there are 0 models that work with files directly - in every case you will need to extract the text from a file and pass that to the model.

    • @yassinmohammedhadjadjaoul9064
      @yassinmohammedhadjadjaoul9064 29 days ago

      @@decoder-sh And how can I do it without copying the whole text and pasting it to the bot?
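
      One way that extraction step might look (a sketch assuming the third-party pypdf package, which is not covered in the video; very long documents would need chunking or RAG rather than a single prompt):

        import ollama
        from pypdf import PdfReader

        # Pull the raw text out of the PDF, then hand it to the model as context
        reader = PdfReader("document.pdf")
        text = "\n".join(page.extract_text() or "" for page in reader.pages)

        response = ollama.chat(
            model="mistral",
            messages=[{"role": "user", "content": f"Summarize this document:\n\n{text}"}],
        )
        print(response["message"]["content"])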

  • @juliamendoza2732
    @juliamendoza2732 11 days ago

    How can I use this, but in a Discord bot instead of Streamlit?

  • @mireazma
    @mireazma 15 days ago

    The "atoms make up everything" joke curiously appears verbatim in another open LLM video. Isn't that too deterministic?

    • @decoder-sh
      @decoder-sh  15 days ago

      It’s possible to change the temperature parameter to make it more “creative”, but I agree that many LLMs seem biased to liking this joke
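
      For example, temperature can be passed through chat()'s options parameter (a minimal sketch; the model name is illustrative):

        import ollama

        response = ollama.chat(
            model="mistral",
            messages=[{"role": "user", "content": "Tell me a joke."}],
            options={"temperature": 1.2},  # higher values give more varied output
        )
        print(response["message"]["content"])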

  • @IronMechanic7110
    @IronMechanic7110 2 months ago

    I think it's much better to use Gradio than streamlit.

    • @decoder-sh
      @decoder-sh  2 months ago

      I'm curious to hear why you think one is much better than the other! I'll likely use Gradio in a future video just to explore the whole landscape

  • @willwimbiscus7456
    @willwimbiscus7456 20 days ago

    Keep getting this error:
    message = response['message']["response"]
    TypeError: 'generator' object is not subscriptable
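
    That error typically means chat() was called with stream=True, which returns a generator of chunks rather than a single dict; note also that the chat endpoint nests the text under "content", not "response". A minimal sketch of both call styles (the model name is illustrative):

      import ollama

      messages = [{"role": "user", "content": "Hi there"}]

      # Without streaming: a single dict you can index into
      response = ollama.chat(model="mistral", messages=messages)
      print(response["message"]["content"])

      # With streaming: iterate the generator chunk by chunk
      for chunk in ollama.chat(model="mistral", messages=messages, stream=True):
          print(chunk["message"]["content"], end="", flush=True)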

  • @mohl-bodell2948
    @mohl-bodell2948 2 months ago

    Quick, succinct and well prepared as usual! However, adding a full python tutorial to your ollama tutorials might be a bridge too far. Showing the python code so we see exactly how ollama is used is important, but explaining the python parts is probably best left to a python tutorial (which might be great for your style as well). As long as you have shown the code and run it in the video, we will have a working example and can learn about python elsewhere. Mentioning names like comprehension, generator or walrus assign is useful, that makes it quicker to look up any detail we don't understand, but expect us to be experienced developers who know python (or whatever language you are showing) well. Do continue explaining exactly how you pick apart the ollama responses; ollama is what we are here for.

    • @decoder-sh
      @decoder-sh  2 months ago

      Great feedback, thank you! You’re totally correct that I’m mixing Python tutorial with ollama project here - I’ll keep the next one more focused.

    • @takimdigital3421
      @takimdigital3421 2 months ago +1

      No, you're good - this way I was able to learn some Python, Ollama, and Streamlit. Not everyone knows how to code. 😊

    • @whateveranyhow8903
      @whateveranyhow8903 1 month ago

      I like the way you clearly explain everything, so I can gain a much better understanding of what is going on.