Create a LOCAL Python AI Chatbot In Minutes Using Ollama

  • Published 18 Jan 2025

COMMENTS • 191

  • @patrickmateus-iq8bi
    @patrickmateus-iq8bi 4 months ago +31

    THE 🐐 I became the Python developer I am today because of this channel. From learning Python for my AS level exams in 2020,
    to becoming an experienced backend developer. From the bottom of my heart, thank you Tim. I'm watching this video because I've entered a hackathon that requires something similar. This channel has never failed me.

  • @umeshlab987
    @umeshlab987 5 months ago +125

    Whenever I get an idea, this guy makes a video about it.

  • @modoulaminceesay9211
    @modoulaminceesay9211 5 months ago +6

    Thanks for saving the day. I've been following your channel for four years now.

  • @JordanCassady
    @JordanCassady 5 months ago +2

    The captions with keywords are like built-in notes, thanks for doing that

  • @krisztiankoblos1948
    @krisztiankoblos1948 5 months ago +29

    The context will fill up the context window very fast. You can store the conversation embeddings with the messages in a vector database and pull the related parts from it.

    • @Larimuss
      @Larimuss 5 months ago +8

      Yes, but that's a bit beyond this video. I guess he should quickly mention there is a memory limit, but storing in a vector database is a whole other beast I'm looking to get into next with langchain 😂

    • @krisztiankoblos1948
      @krisztiankoblos1948 5 months ago +13

      @@Larimuss It is not that hard. I coded it locally and store them in a JSON file. You just store the embedding with each message, then you create the new message's embedding and grab the 10-20 closest messages by cosine distance. It is less than 100 lines. This is the distance function: np.dot(v1, v2)/(norm(v1)*norm(v2)). I also summarize the memories with an LLM so I can keep them shorter. (A sketch of this idea is at the end of the thread.)

    • @landinolandese8298
      @landinolandese8298 5 months ago

      @@krisztiankoblos1948 This would be awesome to learn how to implement. Do you have any recommendations on tutorials for this?

    • @czombiee
      @czombiee 5 months ago

      @@krisztiankoblos1948 Hi! Do you have a repo to share? Sounds interesting!

    • @star_admin_5748
      @star_admin_5748 4 months ago +2

      @@krisztiankoblos1948 Brother, you are beautiful.
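
    A minimal sketch of the memory scheme described in this thread (an illustration only, not code from the video; the embed() callable is a hypothetical stand-in for whatever embedding model you use):

        import json
        import numpy as np
        from numpy.linalg import norm

        MEMORY_FILE = "memory.json"  # [{"text": "...", "embedding": [...]}, ...]

        def cosine(v1, v2):
            # the same distance function quoted in the comment above
            return np.dot(v1, v2) / (norm(v1) * norm(v2))

        def load_memory(path=MEMORY_FILE):
            try:
                with open(path) as f:
                    return json.load(f)
            except FileNotFoundError:
                return []

        def save_message(text, embed, path=MEMORY_FILE):
            # store each message alongside its embedding vector
            memory = load_memory(path)
            memory.append({"text": text, "embedding": embed(text)})
            with open(path, "w") as f:
                json.dump(memory, f)

        def recall(query, embed, k=10, path=MEMORY_FILE):
            # return the k stored messages most similar to the query
            memory = load_memory(path)
            q = np.array(embed(query))
            scored = sorted(
                memory,
                key=lambda m: cosine(q, np.array(m["embedding"])),
                reverse=True,
            )
            return [m["text"] for m in scored[:k]]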

  • @SuspiciousLookingSlime
    @SuspiciousLookingSlime 3 days ago +1

    6:20 DO NOT MISS that he went back in his code and added result as a variable!

  • @srahul80
    @srahul80 19 days ago +1

    Very useful video; managed to set up a local chatbot with llama3.2:3b on my Mac in 15 minutes!

  • @T3ddyPro
    @T3ddyPro 5 months ago +11

    Thanks to your tutorial I recreated Jarvis with a custom GUI, using the llama3 model. I use it in Italian since I'm Italian, but you can also use it in English and other languages.

    • @akhilpadmanaban3242
      @akhilpadmanaban3242 5 months ago

      Are these models completely free?

    • @leodark_animations2084
      @leodark_animations2084 2 months ago

      @@akhilpadmanaban3242 With Llama, yes, since they run locally and you are not using APIs. But they are pretty resource-consuming; I tried them and they couldn't run.

    • @T3ddyPro
      @T3ddyPro 2 months ago

      @@akhilpadmanaban3242 Yes

  • @Zpeaxirious_Official
    @Zpeaxirious_Official 1 day ago +1

    I just made my own Python script that can tell the time and date, with a history manager, filters, TTS and STT, before even finding this video randomly in my YouTube feed.
    Also, I'd recommend y'all have a good PC, otherwise it might take a while.
    Good instruction though.

  • @yuvrajkukreja9727
    @yuvrajkukreja9727 4 months ago +1

    Awesome, that was "the tutorial of the month" from you, Tim!!! Because you didn't use some sponsored tech stack! Those usually are terrible!

  • @ebrahiemmurphy6506
    @ebrahiemmurphy6506 1 month ago +2

    Thanks a lot for the beautiful tutorial Tim, will be giving this a go. You, my friend, are a brilliant teacher. Thanks for sharing 👍👍👍

  • @WhyHighC
    @WhyHighC 5 months ago +11

    New to the world of coding. Teaching myself through YT for now, and this guy is clearly S tier.
    I like him and Programming with Mosh's tutorials. Any other recommendations? I'd prefer more vids like this with actual walkthroughs on my feed.

    • @SiaAlawieh
      @SiaAlawieh 5 months ago +3

      Idk, but I never understood anything from Programming with Mosh videos. Tim is a way better explainer for me, especially that 9-hour beginner-to-advanced video.

    • @M.V.CHOWDARI
      @M.V.CHOWDARI 4 months ago +1

      Bro Code is the GOAT 🐐

    • @WhyHighC
      @WhyHighC 4 months ago

      @@M.V.CHOWDARI Appreciate it!

  • @proflead
    @proflead 5 months ago +1

    Simple and useful! Great content! :)

  • @Larimuss
    @Larimuss 5 months ago

    Wow, thanks! This is a really simple, straightforward guide to get me started writing the Python myself rather than just using people's UIs. Love the explanations.

  • @arxs_05
    @arxs_05 5 months ago +1

    Wow, so cool ! You really nailed the tutorial🎉

  • @kfleming78
    @kfleming78 1 month ago +1

    Fantastic explanation - thank you for this

  • @timstevens3361
    @timstevens3361 2 months ago +1

    very helpful video Tim !

  • @SAK_The_Coder
    @SAK_The_Coder 4 months ago +1

    This is what I need, thank you bro ❤

  • @znaz9012
    @znaz9012 1 month ago

    Best 5 hours of my life right here 😊

  • @joohuynbae5084
    @joohuynbae5084 3 months ago +3

    For some Windows users: if the commands don't work for you, try source name/Scripts/activate to activate the venv.

  • @rajeshjha2630
    @rajeshjha2630 28 days ago

    Love your work bro; I really can't say how much stuff I've gotten to build because of your channel.

  • @specialize.5522
    @specialize.5522 2 months ago

    Very much enjoyed your instruction style - subscribed!

  • @burnoutcreations3606
    @burnoutcreations3606 19 days ago +1

    5:06 I personally find using conda for virtual environments efficient; it even comes with Jupyter, so it's a plus!!

  • @techknightdanny6094
    @techknightdanny6094 5 months ago

    Timmy! Great explanation, concise and to the point. Keep 'em coming boss =).

  • @bause6182
    @bause6182 5 months ago +2

    If you combine this with a webview you can make a sort of artifact in your local app.

  • @leonschaefer4832
    @leonschaefer4832 5 months ago +4

    This just inspired me to save on GPT costs for our SaaS product. Thanks, Tim!

    • @CashLoaf
      @CashLoaf 5 months ago +1

      Hey, I'm into SaaS too. Did you make any project yet?

  • @carsongutierrez7072
    @carsongutierrez7072 5 months ago

    This is what I need right now!!! Thank you CS online mentor!

  • @build.aiagents
    @build.aiagents 4 months ago +2

    lol thumbnail had me thinking there was gonna be a custom UI with the script

  • @repairstudio4940
    @repairstudio4940 5 months ago +1

    Awesomesauce! Tim, please make more vids covering LangChain projects, and maybe an in-depth tutorial! ❤🎉

  • @ShahZ
    @ShahZ 5 months ago +1

    Thanks Tim, ran into a bunch of errors when running the script. Guess who came to my rescue: ChatGPT :)

  • @asharathod9765
    @asharathod9765 4 months ago +1

    Awesome..... I really needed a chatbot replica for a project and this worked perfectly. Thank you!

  • @konradriedel4853
    @konradriedel4853 4 months ago +3

    Hey man, thanks a lot. Could you explain how to bring in my own data, PDFs, web sources, etc., so it can give answers when I need to supply more detailed knowledge about certain internal information for questions regarding my use case?

  • @weiguangli593
    @weiguangli593 4 months ago

    Great video, thank you very much!

  • @dimox115x9
    @dimox115x9 5 months ago +1

    Thank you very much for the video, i'm gonna try that :)

  • @franxtheman
    @franxtheman 5 months ago +3

    Do you have a video on fine-tuning or prompt engineering? I don't want it to be nameless please.😅

  • @siddhubhai2508
    @siddhubhai2508 5 months ago +13

    Please Tim, help me with how to add long-term (in fact ultra-long) memory to my cool AI agent using only the ollama and rich libraries. Maybe MemGPT would be a nice approach. Please help me!

    • @birdbeakbeardneck3617
      @birdbeakbeardneck3617 5 months ago +2

      Not an AI expert, so I could be saying something wrong:
      you mean the AI remembers things from messages way back in the conversation? If so, that's called the context of the AI, which is limited by the training and is also an area of current development; on the other hand, Tim is just making an interface for an already trained AI.

    • @siddhubhai2508
      @siddhubhai2508 5 months ago +1

      @@birdbeakbeardneck3617 I know that bro, but I want custom solutions for what I said, like a vector database or Postgres. The fact is I don't know how to use them; the tutorials are not straightforward unlike Tim's, and the docs don't give me a specific solution either. Yes, I know that after reading the docs I will be able to do it, but I have very little time (3 days), and in those days I will have to add 7 tools to the AI agent. Otherwise I'm continuously trying to do that. ❤️ If you can help me through any article or blog or email, please do that 🙏❤️

    • @davidtindell950
      @davidtindell950 5 months ago +4

      Thx Tim! Now llama3.1 is available under Ollama. It generates great results and has a large context memory!

    • @siddhubhai2508
      @siddhubhai2508 5 months ago +1

      @@davidtindell950 But bro, my project is such that it can't depend on the LLM's context memory. Please tell me if you can help me with that!

    • @davidtindell950
      @davidtindell950 5 months ago

      @@siddhubhai2508 I have found the FAISS vector store provides an effective and large-capacity "persistent memory" with CUDA GPU support.
      ...
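
    A rough sketch of the FAISS idea mentioned above (an illustration, not code from the video). It keeps message embeddings in a FAISS index and looks up nearest neighbours; get_embedding() is a hypothetical stand-in for your embedding model:

        import numpy as np
        import faiss  # pip install faiss-cpu (or faiss-gpu for CUDA support)

        dim = 768                       # dimensionality of your embedding model
        index = faiss.IndexFlatIP(dim)  # inner product; normalise vectors to get cosine similarity
        messages = []                   # parallel list: messages[i] matches index row i

        def remember(text, get_embedding):
            # embed the message and add it to the index
            vec = np.array([get_embedding(text)], dtype="float32")
            faiss.normalize_L2(vec)
            index.add(vec)
            messages.append(text)

        def recall(query, get_embedding, k=10):
            # return the k stored messages most similar to the query
            if not messages:
                return []
            vec = np.array([get_embedding(query)], dtype="float32")
            faiss.normalize_L2(vec)
            _, ids = index.search(vec, min(k, len(messages)))
            return [messages[i] for i in ids[0] if i != -1]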

  • @TechyTochi
    @TechyTochi 5 months ago +1

    This is very useful content. Keep it up!

  • @KumR
    @KumR 5 months ago +3

    Hi Tim - Now we can download Llama 3.1 too... By the way, can you also convert this to a UI using Streamlit?

  • @RevanthK-y1l
    @RevanthK-y1l 2 months ago +2

    Could you please tell us how to create a fine-tuned chatbot using our own dataset?

  • @jagaya3662
    @jagaya3662 5 months ago

    Thanks, super useful and simple!
    I just wondered, with the new Llama model coming out, how I could best use it - so perfect timing xD
    Would have added that Llama is made by Meta - so despite being free, it's comparable to the latest OpenAI models.

  • @MwapeMwelwa-wn9ed
    @MwapeMwelwa-wn9ed 5 months ago +5

    Tech With Tim is my favorite.

    • @WhyHighC
      @WhyHighC 5 months ago +1

      Can I ask who is in 2nd and 3rd?

    • @tech_with_unknown
      @tech_with_unknown 5 months ago +2

      @@WhyHighC 1: tim 2: tim 3: tim

  • @bsick6856
    @bsick6856 5 months ago

    Thank you so much!!

  • @sacv2
    @sacv2 2 months ago

    This is great! thanks

  • @davidtindell950
    @davidtindell950 5 months ago +1

    You may find it 'amusing' or 'interesting' that when I (nihilistically) prompted with 'Hello Cruel World!', 'llama3.1:8b' responded: "A nod to the Smiths' classic song, 'How Soon is Now?' (also known as 'Hello, Hello, How are You?')" !?!?! 🤣

  • @rhmagalhaes
    @rhmagalhaes 5 months ago

    I love how you make it easy for us.
    After that we need a UI and bingo.
    Btw, does it keep the answers in memory after we exit? Don't think so, right?

    • @josho225
      @josho225 5 months ago

      Based on the code, no; only for a single runtime.
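
    A small sketch (not from the video) of one way to keep the conversation across runs: write the context string to a file when the chat loop ends and read it back on start-up:

        import os

        CONTEXT_FILE = "context.txt"

        def load_context():
            # reload the previous conversation, if any
            if os.path.exists(CONTEXT_FILE):
                with open(CONTEXT_FILE, encoding="utf-8") as f:
                    return f.read()
            return ""

        def save_context(context):
            # persist the conversation for the next run
            with open(CONTEXT_FILE, "w", encoding="utf-8") as f:
                f.write(context)

        # usage inside the chat loop:
        #   context = load_context()
        #   ...append "User: ..." / "AI: ..." lines to context as you chat...
        #   save_context(context)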

  • @toddgattfry5405
    @toddgattfry5405 5 months ago

    Cool!! Could I get this to summarize my e-library?

  • @pixelmz
    @pixelmz 5 months ago

    Hey there, is your VSCode theme public? It's really nice, would love to have it to customize

  • @taymalsous5894
    @taymalsous5894 4 months ago +1

    Hello Tim! This video is awesome, but the only problem I have is that the Ollama chatbot is responding very slowly. Do you have any idea how to fix this?

  • @H4R4K1R1x
    @H4R4K1R1x 5 months ago

    This is swag, how can we create a custom personality for the llama3 model?

  • @praveertiwari3545
    @praveertiwari3545 5 months ago +1

    Hi Tim,
    I recently completed your video on the django-react project, but I need some urgent help from your side: could you make a video on how to deploy a django-react project on Vercel, Render, or another well-known platform? This would really be helpful, as many users on the Django forum are still confused about deploying a django-react project to popular hosting sites.
    Kindly help with this.

  • @neprr1825
    @neprr1825 10 days ago

    Can you train the robot or give it a prompt? For example, if you want to create a chatbot for a business, can you give it prompts from the business so it can answer questions only based on the information from the given prompts?

  • @cyrilypil
    @cyrilypil 5 months ago +1

    How do you get Local LLM to show? I don’t have that in my VS Code

  • @tengdayz2
    @tengdayz2 5 months ago

    Thank You.

  • @Money4Jam2011
    @Money4Jam2011 5 months ago

    Great video, learned a lot. Can you advise on the route I would take if I wanted to build a chatbot around a specific niche like comedy, and build an app that I could sell or give away for free? I would need to train the model on that specific niche and that niche only, then host it on a server, I would think. An outline of these steps would be much appreciated.

  • @davidtindell950
    @davidtindell950 5 months ago +2

    Adding a context, of course, generates interesting results: "context": "Hot and Humid Summer" --> chain invoke result = To be honest, I'm struggling to cope with this hot and humid summer. The heat and humidity have been really draining me lately. It feels like every time I step outside, I'm instantly soaked in sweat. I just wish it would cool down a bit! How about you? ...🥵

  • @skadi3399
    @skadi3399 5 months ago

    Great video! Is there any way to connect a personal database to this model (so that the chat can answer questions based on the information in the database)? I have a database in Postgres and have already used RAG on it, but I have no idea how to connect the DB and the chat. Any ideas?

  • @alexandresemenov8671
    @alexandresemenov8671 5 months ago +1

    Hello Tim! When I run Ollama directly there is no delay in the response, but using the script with LangChain some delay appears. Why is that? How do I solve it?

  • @arunbalakrishnan8978
    @arunbalakrishnan8978 5 months ago

    Useful. Keep it up!

  • @AlexTheChaosFox1996
    @AlexTheChaosFox1996 5 months ago +1

    Will this run on an android tablet?

  • @sean_vikoren
    @sean_vikoren 3 months ago

    thank you.

  • @swankyshivy
    @swankyshivy 2 months ago +1

    How can this be moved from running locally to an internal website?

  • @31-jp6ok
    @31-jp6ok 5 months ago

    If you read my message: thank you for teaching, and would you mind teaching me more about fine-tuning? What should I do? (I want TensorFlow.) I also want it to be able to learn what I can't answer by myself. What should I do?

  • @AmitErandole
    @AmitErandole 5 months ago

    Can you show us how to do RAG with llama3?

  • @jorgeochoa4032
    @jorgeochoa4032 5 months ago

    Hello, do you know if it's possible to use this model as a "pre-trained" one and add some new, let's say... local information to the model, to use it for a specific task?

  • @kinuthiastevie4031
    @kinuthiastevie4031 5 months ago +1

    Nice one

  • @TanujSharma-d9o
    @TanujSharma-d9o 5 months ago

    Can you teach us how to implement it in GUI form? I don't want to run the program every time I want help with this type of thing.

  • @usamaejaz5264
    @usamaejaz5264 2 days ago

    I implemented it, and it takes minutes to respond. Why is it so slow?

  • @БогданСірський
    @БогданСірський 5 months ago

    Hey, Tim! Thanks for your tutorial. I have a problem: the bot isn't responding to me. Maybe someone else has the same problem? Give me some feedback, please.

  • @m.saksham3409
    @m.saksham3409 5 months ago

    I have not implemented it myself, but I have a doubt: you are using LangChain where the model is Llama 3.1, and LangChain manages everything here, so what's the use of Ollama?

    • @gunabaki7755
      @gunabaki7755 5 months ago

      LangChain simplifies interactions with LLMs; it doesn't provide the LLM. We use Ollama to get the LLM.
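
    A minimal sketch of that split, along the lines of what the video builds (assumes the langchain-ollama package is installed and "ollama pull llama3" has already been run):

        from langchain_ollama import OllamaLLM            # wrapper around the locally served model
        from langchain_core.prompts import ChatPromptTemplate

        template = """Answer the question below.

        Here is the conversation history: {context}

        Question: {question}

        Answer:"""

        model = OllamaLLM(model="llama3")                 # Ollama serves the LLM
        prompt = ChatPromptTemplate.from_template(template)
        chain = prompt | model                            # LangChain wires prompt -> model

        context = ""
        while True:
            question = input("You: ")
            if question.lower() == "exit":
                break
            answer = chain.invoke({"context": context, "question": question})
            print("Bot:", answer)
            context += f"\nUser: {question}\nAI: {answer}"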

  • @harshiramani7274
    @harshiramani7274 4 days ago

    My download stops midway; why is it that I am not getting it?

  • @okotjakimgonzalo2270
    @okotjakimgonzalo2270 5 months ago

    Where do you get all this stuff from

  • @TigerBrownTiger
    @TigerBrownTiger 3 months ago

    Why does a Microsoft Publisher window keep popping up saying "unlicensed product" and refusing to let it run?

  • @rushilnagpal6565
    @rushilnagpal6565 1 month ago +1

    Can I have this code?

  • @thegamingaristocrat7615
    @thegamingaristocrat7615 4 months ago

    Is there any way to make a Python script automatically train a locally run model?

  • @sharanvellore9016
    @sharanvellore9016 5 months ago

    Hi, I have tried this and it's working, but the model's response time is long. Is there anything I can do to reduce it?

  • @ruthirockstar2852
    @ruthirockstar2852 5 months ago

    Is it possible to host this on a cloud server, so that I can access my custom bot whenever I want?

  • @NameRoss
    @NameRoss 3 months ago

    Do I need to install LangChain?

  • @Hrlover205
    @Hrlover205 5 months ago

    I don't know what is happening: when I run the Python file in cmd it shows me "hello world" and then the command ends.

  • @ccKuang-ziqian
    @ccKuang-ziqian 2 months ago

    Should I install Ollama in a virtual env?

    • @muhammadsikandarsubhani8954
      @muhammadsikandarsubhani8954 1 month ago

      Doesn't matter; it will always be stored in AppData/Local/ollama.

    • @PythonCodeCampOrg
      @PythonCodeCampOrg 16 days ago

      It's not mandatory, but using a virtual environment is highly recommended. It helps manage dependencies more cleanly and avoids potential conflicts with other projects. However, if you prefer not to, you can install it globally, though it might cause issues later if you work on multiple projects.

  • @乾淨核能
    @乾淨核能 5 months ago

    what's the minimum hardware requirement? thank you!

  • @bhaveshsinghal6484
    @bhaveshsinghal6484 5 months ago

    Tim, this Ollama model is running on my CPU and hence is really slow. Can I make it run on my GPU somehow?

  • @abuhabban-tz8xj
    @abuhabban-tz8xj 5 months ago

    What are your PC specs, sir?

  • @RaunakKesharwani-i3d
    @RaunakKesharwani-i3d 25 days ago

    Sir, is it necessary to request access to the Llama models?
    Actually I am confused about the permission terms. Can you please help regarding that? 😊😊

    • @PythonCodeCampOrg
      @PythonCodeCampOrg 16 days ago

      For using Llama models locally, you generally don't need to request access, as the models are open-source and available for local deployment. However, you should always check the specific licensing and permission terms for the version you're using. Most open-source versions are free to use, but it's always good to review the terms to ensure compliance.

  • @andhika277
    @andhika277 5 months ago

    How much RAM is required to make this program run well? Because I only have 4 GB of RAM.

  • @akshajalva
    @akshajalva 1 month ago

    Can I use a document as context, so that the chatbot answers user queries only from that document?

    • @PythonCodeCampOrg
      @PythonCodeCampOrg 16 days ago

      Yes, you can just load your PDF file and start asking questions from it. The Mistral 7B model will generate answers based solely on the content of the document, ensuring that responses are relevant to the information you’ve provided.
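
    One way to do that with local models, sketched with LangChain (an illustration, not code from the video; it assumes langchain-ollama, langchain-community, pypdf and faiss-cpu are installed and that an embedding model such as nomic-embed-text plus a chat model have been pulled with Ollama):

        from langchain_community.document_loaders import PyPDFLoader
        from langchain_community.vectorstores import FAISS
        from langchain_ollama import OllamaEmbeddings, OllamaLLM
        from langchain_core.prompts import ChatPromptTemplate
        from langchain_text_splitters import RecursiveCharacterTextSplitter

        # load the PDF and split it into overlapping chunks
        docs = PyPDFLoader("my_document.pdf").load()
        chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

        # index the chunks so relevant passages can be retrieved per question
        store = FAISS.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))
        model = OllamaLLM(model="llama3")
        prompt = ChatPromptTemplate.from_template(
            "Answer using ONLY the context below. If the answer is not there, say so.\n\n"
            "Context:\n{context}\n\nQuestion: {question}"
        )
        chain = prompt | model

        question = "What is this document about?"
        context = "\n\n".join(d.page_content for d in store.similarity_search(question, k=4))
        print(chain.invoke({"context": context, "question": question}))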

  • @sunhyungkim5764
    @sunhyungkim5764 3 months ago

    Amazing!

  • @stilly5016
    @stilly5016 1 month ago

    Can I make an app, upload it to the Play Store and make money? Is that OK or not? 😢 Please reply

  • @antoniosa
    @antoniosa 5 months ago

    A dumb question... where is the template used?

  • @mit2874
    @mit2874 5 months ago

    Do I need VRAM for this?

  • @vivekanandl8798
    @vivekanandl8798 5 months ago

    Does the response speed of an AI bot like Llama depend on the GPU?

  • @felipemachado8311
    @felipemachado8311 2 months ago

    Can I train this model? Give it information beforehand that it can use to answer me?

  • @kingkd7179
    @kingkd7179 4 months ago

    It was a great tutorial and I followed it properly, but I am still getting an error:
    ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it
    I am running this code on my office machine, which has restricted the OpenAI models and AI sites.

  • @aviralshastri
    @aviralshastri 4 months ago

    How can we stream output??

  • @Eyuel3256
    @Eyuel3256 5 months ago

    I had been using the program Ollama on my laptop, and it was utilizing 101% of my CPU's processing power. This excessive usage threatened to overheat my device and decrease its performance. Therefore, I decided that I would discontinue using the program.

  • @PaulRamone356
    @PaulRamone356 5 months ago

    PS C:\Windows\system32> ollama pull llama3
    Error: could not connect to ollama app, is it running?
    What seems to be wrong? (Sorry for the noob question.)

    • @gunabaki7755
      @gunabaki7755 5 months ago +2

      You need to run the Ollama application first; it usually starts when you boot up your PC.

    • @PaulRamone356
      @PaulRamone356 5 months ago

      @@gunabaki7755 Will try this, thanks bro!

  • @silasknapp4450
    @silasknapp4450 3 months ago

    Hi. Is there a way to uninstall llama3 again?

  • @chucknorrisfactfr
    @chucknorrisfactfr 4 months ago +1

    But does it handle NSFW conversation?

  • @opita_opica
    @opita_opica 3 months ago

    This context thing is not working; the bot does not know what came earlier in the conversation.

  • @trevoro.9731
    @trevoro.9731 5 months ago

    If you need to work with large amounts of data, OpenAI's performance still can't be matched locally unless you spend a ridiculous amount on your computer build.

    • @hirthikbalajic
      @hirthikbalajic 5 months ago

      It can be matched by running the Llama 3.1 405B model!

  • @VatsalyaB
    @VatsalyaB 4 months ago

    thx ;)