Decoder
LangChain Fundamentals: Build your First Chain
LangChain is one of the most popular frameworks for coding complex LLM-powered logic. It provides the ability to batch and stream calls across different LLM providers, vector databases, third-party APIs, and much more. In this video, we explore the very basics of getting started with LangChain: building a rudimentary chain complete with templating and an LLM call. Let's go!
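For a rough idea of the end result, here is a minimal sketch in the spirit of the chain built in the video, assuming the langchain-core and langchain-community packages plus a local Ollama install; the model name "mistral" and the example prompt are placeholders, not the exact code from the video:

    from langchain_core.prompts import ChatPromptTemplate
    from langchain_community.llms import Ollama

    # A template with one variable, piped into a local model with LangChain's "|" operator
    prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
    llm = Ollama(model="mistral")

    chain = prompt | llm
    print(chain.invoke({"topic": "bears"}))
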
Links:
Code from video - decoder.sh/videos/langchain-fundamentals:-build-your-first-chain
LangChain - langchain.com
Ollama Integration - api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html
Prompts & Templates - python.langchain.com/v0.1/docs/modules/model_io/prompts/quick_start/
Timestamps:
00:00 - Intro
00:25 - Set up Environment
02:41 - Introducing Runnable
03:15 - Message Format
03:52 - ChatModel
05:10 - Why are there so many ways to do the same thing?
06:05 - Types of Messages
07:10 - Introducing Templates
11:12 - Combining Templates w/ LLMs
12:09 - Introducing Pipe
12:36 - Running our chain
13:36 - Review
Views: 7,674

Videos

Meta's Llama3 - The Mistral Killer?
2K views · 8 months ago
Meta's Llama3 family of models in 8B and 70B flavors was just released and is already making waves in the open source community. With a much larger tokenizer, GQA for all model sizes, and 7.7 million GPU hours spent training on 15 TRILLION tokens, Llama3 seems primed to overtake incumbent models like Mistral and Gemini. I review the most important parts of the announcement before testing the ne...
RAG from the Ground Up with Python and Ollama
38K views · 9 months ago
Retrieval Augmented Generation (RAG) is the de facto technique for giving LLMs the ability to interact with any document or dataset, regardless of its size. Follow along as I cover how to parse and manipulate documents, explore how embeddings are used to describe abstract concepts, implement a simple yet powerful way to surface the most relevant parts of a document to a given query, and ultimat...
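As a rough sketch of the retrieval step described above, assuming the ollama Python package and a local embedding model such as "nomic-embed-text" (the chunks, model name, and query are illustrative, not the code from the video):

    import ollama
    import numpy as np

    # Embed each chunk once, then rank chunks against the query by cosine similarity
    chunks = ["Ollama runs language models locally.", "RAG surfaces relevant text to ground the LLM."]
    chunk_vecs = [ollama.embeddings(model="nomic-embed-text", prompt=c)["embedding"] for c in chunks]

    def cosine(a, b):
        a, b = np.array(a), np.array(b)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    q_vec = ollama.embeddings(model="nomic-embed-text", prompt="What does RAG do?")["embedding"]
    best = max(range(len(chunks)), key=lambda i: cosine(q_vec, chunk_vecs[i]))
    print(chunks[best])  # most relevant chunk, ready to be passed to the LLM as context
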
LLM Chat App in Python w/ Ollama-py and Streamlit
10K views · 10 months ago
In this video I walk through the new Ollama Python library, and use it to build a chat app with UI powered by Streamlit. After reviewing some important methods from this library, I touch on Python generators as we construct our chat app, step by step. Check out my other Ollama videos - ua-cam.com/play/PL4041kTesIWby5zznE5UySIsGPrGuEqdB.html Links: Code from video - decoder.sh/videos/llm-chat-ap...
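A minimal sketch of such a chat app, assuming the ollama and streamlit packages are installed; the model name "mistral" is a placeholder and this is not the exact code from the video:

    import ollama
    import streamlit as st

    st.title("Local chat")
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Replay the conversation so far
    for m in st.session_state.messages:
        st.chat_message(m["role"]).write(m["content"])

    if prompt := st.chat_input("Say something"):
        st.session_state.messages.append({"role": "user", "content": prompt})
        st.chat_message("user").write(prompt)
        # Stream the assistant's reply as the generator yields chunks
        stream = ollama.chat(model="mistral", messages=st.session_state.messages, stream=True)
        with st.chat_message("assistant"):
            reply = st.write_stream(chunk["message"]["content"] for chunk in stream)
        st.session_state.messages.append({"role": "assistant", "content": reply})

Run it with "streamlit run app.py" (assuming the file is saved as app.py).
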
Importing Open Source Models to Ollama
42K views · 11 months ago
Hugging Face is a machine learning platform that's home to nearly 500,000 open source models. In this video, I show you how to download, transform, and use them in your local Ollama setup. Get access to the latest and greatest without having to wait for it to be published to Ollama's model library. Let's go! Check out my other Ollama videos - ua-cam.com/play/PL4041kTesIWby5zznE5UySIsGPrGuEqdB.h...
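One possible way to pull a GGUF file down from Hugging Face before importing it, assuming the huggingface_hub package; the repo and filename below are placeholders rather than the specific model used in the video:

    from huggingface_hub import hf_hub_download

    # Download a quantized GGUF file; pick any repo/filename you like
    path = hf_hub_download(
        repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
        filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    )
    print(path)  # point a Modelfile's FROM line at this path, then run `ollama create`
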
Use Your Self-Hosted LLM Anywhere with Ollama Web UI
81K views · 11 months ago
Take your self-hosted Ollama models to the next level with Ollama Web UI, which provides a beautiful interface and features like chat history, voice input, and user management. We'll also explore how to use this interface and the models that power it on your phone using the powerful Ngrok tool. Watch my other Ollama videos - ua-cam.com/play/PL4041kTesIWby5zznE5UySIsGPrGuEqdB.html Links: Code fr...
Installing Ollama to Customize My Own LLM
38K views · 1 year ago
Ollama is the easiest tool to get started running LLMs on your own hardware. In my first video, I explore how to use Ollama to download popular models like Phi and Mistral, chat with them directly in the terminal, use the API to respond to HTTP requests, and finally customize our own model based on Phi to be more fun to talk to. Watch my other Ollama videos - ua-cam.com/play/PL4041kTesIWby5zznE...
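A minimal sketch of hitting the Ollama HTTP API from Python, assuming Ollama is running on its default port 11434; the model name "phi" matches the model mentioned above, but the rest is illustrative rather than the code from the video:

    import requests

    # Non-streaming generation request against the local Ollama server
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "phi", "prompt": "Why is the sky blue?", "stream": False},
    )
    print(resp.json()["response"])
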

COMMENTS

  • @AniruddhTiwari-q9b5w
    @AniruddhTiwari-q9b5w 7 days ago

    This was great, would love to see more content from you. Cheers!

  • @masouddeylami969
    @masouddeylami969 11 days ago

    Thanks for sharing the code.

  • @Marceloamado1998
    @Marceloamado1998 17 days ago

    You are a great teacher; you clearly break concepts down and make the information easier to digest. Thank you!

  • @noormohammedshikalgar
    @noormohammedshikalgar 18 days ago

    Very simple and informative video, thanks man. Good work.

  • @gregdotcomm
    @gregdotcomm 18 days ago

    Shouldn't there be a free way to access a local port on the internet other than ngrok?

    • @decoder-sh
      @decoder-sh 17 days ago

      Yes, with a good amount of caveats about security, availability, and accessibility

  • @QuizmasterLaw
    @QuizmasterLaw 22 days ago

    Easiest way around 2:12 is to paste the .txt into a markdown editor and save it as .md. I don't know why, but it solves the line breaks.

  • @arkadiuszkowalski6753
    @arkadiuszkowalski6753 25 days ago

    Thx, very helpful.

  • @talefey
    @talefey 28 days ago

    Thanks, I hadn't heard of ngrok before today. This worked perfectly!

  • @billmarshall383
    @billmarshall383 1 month ago

    Great video. Is there somewhere to download the code you used? Currently, I am stopping the presentation and typing from your screen, but there must be a better way. Thanks, this is the best RAG description I have found.

    • @decoder-sh
      @decoder-sh 1 month ago

      Hey Bill, thanks for watching! You can find all the code from my videos here decoder.sh/videos/rag-from-the-ground-up-with-python-and-ollama

  • @kaimuller3990
    @kaimuller3990 1 month ago

    What software are you using for the terminal and accessing ollama? I've managed to install ollama + an llm but I find the standard shell view a bit confusing. Thank you for the video!

    • @decoder-sh
      @decoder-sh 1 month ago

      Thanks for watching! I use iTerm, but any terminal should work. You can find Ollama's CLI reference here (github.com/ollama/ollama?tab=readme-ov-file#cli-reference). Note that you may need to restart your terminal app after installing ollama to be able to interact with it. Good luck!

  • @Jeff-x7d
    @Jeff-x7d 1 month ago

    Wow! Great video! You explained everything very clearly. I would love to be able to see how to do this using Langchain, etc... Thank you!

    • @decoder-sh
      @decoder-sh 1 month ago

      I've made one video on LangChain so far! I've been on a longer hiatus than I wanted this year, but I'll continue with LangChain when I'm back :) ua-cam.com/video/qYJSNCPmDIk/v-deo.html

  • @pkuioouurrsq-yb8ku
    @pkuioouurrsq-yb8ku 1 month ago

    How can we train the model with our custom data so that it produces results based on the given data?

  • @pkuioouurrsq-yb8ku
    @pkuioouurrsq-yb8ku 1 month ago

    How do we structure the response as JSON content?

  • @albertoavendano7196
    @albertoavendano7196 1 month ago

    Top notch video ... Thanks a lot ... I would love to know if we can use GPT locally as a way of having other options for ollama.

    • @decoder-sh
      @decoder-sh 1 month ago

      Hi there, could you explain a bit more about what you want? I love running local models so I'd like to help figure this out. If you're asking to use OpenAI's ChatGPT: you can access it through OpenAI's API, but that wouldn't be a "local" model since you're sending all of your requests to OpenAI.

  • @Mulakulu
    @Mulakulu 1 month ago

    As a newbie to this, you kind of jumped 10 steps here at 3:05. Also, I'm on Windows and have no experience using Linux. Is there any documentation on how to do this on Windows?

    • @decoder-sh
      @decoder-sh 1 month ago

      Good call, sorry about that! Step 1: Create a new file called "Modelfile" (the name isn't important, you can call it whatever you want). Step 2: Edit the Modelfile (which is what I'm doing at 3:05). If you're not familiar with what a Modelfile is or how it works, check out my older video for a refresher: ua-cam.com/video/xa8pTD16SnM/v-deo.html. You can view all of the code I wrote here: decoder.sh/videos/importing-open-source-models-to-ollama. I don't have any videos for Windows unfortunately, but I believe the Ollama CLI is the same for all operating systems.

  • @vuyombam4872
    @vuyombam4872 1 month ago

    Great tutorial David, clear and easy to follow. You just got a new subscriber 🦾

  • @Tazzquilizer
    @Tazzquilizer 2 months ago

    thank you for this amazing video

  • @lilaiyad2173
    @lilaiyad2173 2 months ago

    Is there a way to let Llama answer from its own knowledge base when my own data is not relevant or accurate enough to answer the user's prompt?

  • @HarishPillay
    @HarishPillay 2 months ago

    Important nit to pick: these Llama models are not open source in any sense. They are proprietary models, but you are free to use them.

    • @decoder-sh
      @decoder-sh 2 months ago

      Valid point! Can’t run a business on llama models without licensing

  • @HarishPillay
    @HarishPillay 2 months ago

    Thanks for your series of videos. They are to the point and very comprehensive. Thanks again!

  • @fertfert4661
    @fertfert4661 2 months ago

    I installed ngrok there in Docker as well, so I don't have to deal with a running ngrok window.

    • @decoder-sh
      @decoder-sh 1 month ago

      "I installed ngrok there in docker and I don't have to deal with ngrok window running" <- that's smart! I'm sure some other viewers will benefit from that as well.

  • @hobobob11
    @hobobob11 2 months ago

    How do I make ngrok run in the background in Docker so I can close my terminal? I'm new to Docker and don't know how to do it. Any help is appreciated.

    • @decoder-sh
      @decoder-sh 1 month ago

      For any command you run in your terminal, you can just add a "&" after it to have it run in the background. In this case "ngrok http localhost:3000 &"

  • @jossushardware1158
    @jossushardware1158 2 months ago

    Great video!! Please make instructions on how to run models that use the MllamaForConditionalGen architecture.

  • @dhanasekhar-g6i
    @dhanasekhar-g6i 2 months ago

    Really appreciated. Well-made video; now I understand the crux of LangChain.

  • @vigneshpadmanabhan
    @vigneshpadmanabhan 2 months ago

    I really want to appreciate this video. There are so many LangChain videos, but nothing clarifies the basics this well. Also, given that LangChain gets updated very frequently, being clear about the core makes this a beautiful video. Please make a series out of these, covering all the basics needed, new approaches, etc.

  • @christoherright6430
    @christoherright6430 2 months ago

    This is a true tutorial: it explains very well only the important information you need to implement RAG. Thanks.

  • @bimbim1862
    @bimbim1862 2 months ago

    FileNotFoundError: spm tokenizer.model not found.

  • @EricBurgers-qc6ox
    @EricBurgers-qc6ox 2 months ago

    wow, excellent vid. Thanks!

  • @kebman
    @kebman 2 months ago

    I'm not a software engineer, and I have over three decades of experience not being a software engineer. I know how to program though. I've done it for like a while. Also I've been teaching programming, so there's that. In other words.

  • @tokyofrostgang8302
    @tokyofrostgang8302 3 months ago

    6:04 talks about quantization (how you can quantize your GGUF file to a smaller size/bit format using a docker command). Well, I think they added that feature on Hugging Face recently. When I went to the model mentioned in the video (well, TheBloke's version of it), there's a [Use this model] button displayed next to the [Train] button; when you click on it, you now have an "Ollama" choice... pretty nice! Thanks for this video, it was extremely helpful.

  • @Noxmyn
    @Noxmyn 3 months ago

    7:52 the key was visible

    • @decoder-sh
      @decoder-sh 1 month ago

      Thank you! I invalidated the key after sharing this video. Next time I'll do a better job.

  • @stephansage
    @stephansage 3 months ago

    Is there a way to get the same interface as in the video now (Documents, Prompts, and Models as separate sections in the left menu), or is there no way and we're stuck with the current lame web UI version?

  • @davethorn9423
    @davethorn9423 3 months ago

    Thanks for the video. You kind of skip over the Modelfile for the Hugging Face converted file at the end; how do you determine the prompt template to use?

  • @FrankenLab
    @FrankenLab 3 months ago

    I liked how you used cosine similarity to show the actual chunks of matched text. Since this usually happens behind the curtain, it was nice to have that extra insight. I like your style and just subscribed.

  • @Rohambili
    @Rohambili 3 months ago

    Hi! What do I do when I installed Ollama with the sh script on Linux, after cloning the repo...?

  • @JohnSigvald
    @JohnSigvald 3 months ago

    Wow, thank you for this! :D

  • @remedyreport
    @remedyreport 3 months ago

    Thanks! I'm getting up to speed on all this info. I was wondering where to find these LLM models that ollama run didn't know about.

  • @republicofamerica1229
    @republicofamerica1229 3 months ago

    Amazing explanation. Thanks

  • @iamwhoiam7057
    @iamwhoiam7057 3 months ago

    I am just a beginner in all things AI, and voila, I implemented this video successfully! So proud of that, and my AI is running on my phone. I feel so empowered now. Thanks for a great video.

    • @decoder-sh
      @decoder-sh 1 month ago

      That's awesome!! Thank you so much for sharing that with me, I look forward to seeing where you go from here :)

  • @joelreyes5583
    @joelreyes5583 4 months ago

    This helped me a lot. Great quality and good way of explaining everything. Thank you so much

  • @acan.official
    @acan.official 4 months ago

    Where do I put the first code? I'm very much a beginner.

    • @acan.official
      @acan.official 4 months ago

      Found it. But why does the web address change every time? Can I make it fixed or customize it somehow?

  • @swxin9
    @swxin9 4 months ago

    Dude just made my doubts clear before I finished my tea.

  • @UTubeSucksssss
    @UTubeSucksssss 4 months ago

    For the life of me, I can't figure out how to connect Open WebUI to Ollama. 127.0.0.1:11434/ shows Ollama is running. I tried host.docker.internal:11434 and 127.0.0.1:11434/ in the Open WebUI Ollama API setting, still unsuccessful. Went inside the Docker container (docker exec -it name) and curled 127.0.0.1:11434, still no good. I deleted Ollama, removed all the Docker images, etc., still no good :( The regular container without port mapping works fine though. Great videos btw, I have watched all your videos and followed your tutorials. I hope you do more Ollama and LangChain videos.

  • @TimothyMusson
    @TimothyMusson 4 months ago

    I'm really impressed with the 27B version of Gemma2. It's working well for me as a usefully competent Russian language conversation partner/tutor, which is pretty amazing for something small enough to run locally. Mistral (7B) and Llama3 (8B) weren't quite sharp enough.

  • @nandinijampana528jampana3
    @nandinijampana528jampana3 4 months ago

    First of all, thank you for making this video!! Can you also make a video on how to handle multiple text files? Thank you.

  • @TimothyMusson
    @TimothyMusson 4 months ago

    I really like the way you present this stuff so clearly and directly - you have a great teaching style. I'm going to keep tinkering with the chat app: it's fun! Glad I found this channel - thanks :)

  • @szebike
    @szebike 4 months ago

    Awesome structure and explanation!

  • @OgeIloanusi
    @OgeIloanusi 4 months ago

    This is a great video. You teach like a professor. You're an expert and very talented! Your organization will indeed love working with you.

  • @OgeIloanusi
    @OgeIloanusi 4 months ago

    You're great!

  • @OgeIloanusi
    @OgeIloanusi 4 months ago

    Thank You!!