THE 🐐 I became the Python developer I am today because of this channel. From learning Python for my AS level exams in 2020,
to an experienced backend developer. From the bottom of my heart, Thank You Tim. I'm watching this video because I have entered a Hackathon that requires something similar. This channel has never failed me.
Whenever I get a idea this guy makes a video about it
Me too 😂
You are BRILLIANT @umeshlab987
We are one
That's right!
*an idea
Thanks for saving the day. I've been following your channel for four years now.
The captions with keywords are like built-in notes, thanks for doing that
The conversation will fill up the context window very fast. You can store the conversation embeddings with the messages in a vector database and pull the related parts from it.
Yes, but that's a bit beyond this video. I guess he should quickly mention there's a memory limit, though. Storing in a vector database is a whole other beast I'm looking to get into next with LangChain 😂
@@Larimuss It is not that hard. I coded it locally and store them in a JSON file. You just store the embedding with each message, then you create the new message's embedding and grab the 10-20 best-matching messages by cosine similarity. It is fewer than 100 lines. This is the similarity function: np.dot(v1, v2)/(norm(v1)*norm(v2)). I also summarize the memories with an LLM so I can keep them shorter.
@@krisztiankoblos1948 This would be awesome to learn how to implement. Do you have any recommendations on tutorials for this?
@@krisztiankoblos1948Hi! Do u have a repo to share? Sounds interesting!
@@krisztiankoblos1948 Brother, you are beautiful.
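For anyone who asked for a reference: the retrieval trick described in this thread can be sketched in a few lines of NumPy. This is an illustrative version, not the commenter's actual code; the message store and toy 3-dimensional embeddings are made up for the example.

```python
import numpy as np
from numpy.linalg import norm

def cosine_similarity(v1, v2):
    # Same formula as in the comment: dot product over the product of norms.
    return np.dot(v1, v2) / (norm(v1) * norm(v2))

def top_k_messages(memory, query_embedding, k=10):
    """Return the k stored messages whose embeddings best match the query."""
    scored = [
        (cosine_similarity(np.array(m["embedding"]), query_embedding), m["text"])
        for m in memory
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:k]]

# Toy 3-dimensional "embeddings" stand in for real embedding-model output.
memory = [
    {"text": "we talked about cats", "embedding": [1.0, 0.0, 0.0]},
    {"text": "we talked about dogs", "embedding": [0.0, 1.0, 0.0]},
    {"text": "we talked about python", "embedding": [0.9, 0.1, 0.0]},
]
query = np.array([1.0, 0.0, 0.0])
print(top_k_messages(memory, query, k=2))
```

In a real chatbot, the retrieved messages would be prepended to the prompt as context; an LLM-written summary of old messages (as the commenter suggests) keeps that context short.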
6:20 DO NOT MISS that he went back in his code and added result as a var!
Very useful video, I managed to set up a local chatbot with llama3.2:3b on my Mac in 15 minutes!
Thanks to your tutorial I recreated Jarvis with a custom GUI, using the llama3 model. I use it in Italian because I'm Italian, but you can also use it in English and other languages.
Are these models completely free?
@@akhilpadmanaban3242 With Llama, yes, since they run locally and you're not using APIs. But they're pretty resource-hungry; I tried them and they wouldn't run on my machine.
@@akhilpadmanaban3242 Yes
I just made my own Python script that can tell the time and date, with a history manager, filter, TTS, and STT, before even finding this video randomly in my YouTube feed.
Also, I'd recommend y'all have a good PC, otherwise it might take a while.
Good instruction though
Awesome, that was "the tutorial of the month" from you, Tim!!! Because you didn't use some sponsored tech stack; those usually are terrible!
Thanks a lot for the beautiful tutorial, Tim; I will be giving this a go. You, my friend, are a brilliant teacher. Thanks for sharing 👍👍👍
New to the world of coding. Teaching myself through YT for now and this guy is clearly S Tier.
I like him and Programming with Mosh's tutorials. Any other recommendations? I'd prefer more vids like this with actual walkthroughs in my feed.
Idk, but I never understood anything from Programming with Mosh's videos. Tim is a way better explainer for me, especially that 9-hour beginner-to-advanced video.
Bro code is GOAT 🐐
@@M.V.CHOWDARI Appreciate it!
Simple and useful! Great content! :)
Wow, thanks! This is a really simple, straightforward guide to get me started writing Python rather than just using other people's UIs. Love the explanations.
Wow, so cool ! You really nailed the tutorial🎉
Fantastic explanation - thank you for this
very helpful video Tim !
This is what i need thank you bro ❤
Best 5 hours of my life right here 😊
For some Windows users: if the activation commands don't work for you, try source name/Scripts/activate to activate the venv.
Love your work bro, I really can't say how much I've been able to build because of your channel.
Very much enjoyed your instruction style - subscribed!
5:06 I personally find using conda for virtual environments efficient; it even comes with Jupyter, so it's a plus!!
Timmy! Great explanation, concise and to the point. Keep 'em coming boss =).
If you combine this with a webview you can make a sort of artifact in your local app.
This just inspired me saving GPT Costs for our SaaS Product, Thanks Tim!
Hey, I'm into SaaS too. Did you make any project yet?
This is what I need right now!!! Thank you CS online mentor!
lol thumbnail had me thinking there was gonna be a custom UI with the script
Awesomesauce! Tim make more vids covering LangChain projects please and maybe an in depth tutorial! ❤🎉
Thanks Tim, I ran into a bunch of errors when running the script. Guess who came to my rescue: ChatGPT :)
Awesome... I really needed a chatbot replica for a project and this worked perfectly... thank you
Hey man, thanks a lot. Could you explain how to feed in your own data (PDFs, web sources, etc.) so it can give answers when I need it to have more detailed knowledge of certain internal information, for questions regarding my use case?
Great video, thank you very much!
Thank you very much for the video, i'm gonna try that :)
Do you have a video on fine-tuning or prompt engineering? I don't want it to be nameless please.😅
Please, Tim, help me add long-term (in fact, ultra-long) memory to my cool AI agent using only Ollama and the rich library. Maybe MemGPT would be a nice approach. Please help me!
Not an AI expert, so I could be saying something wrong:
you mean the AI remembers things from messages way back in the conversation? If so, that's called the context of the AI; it's limited by how the model was built and is an area of current development. On the other hand, Tim is just making an interface for an already-trained AI.
@@birdbeakbeardneck3617 I know that, bro, but I want a custom solution for what I said, like a vector database or Postgres. The fact is I don't know how to use them; the tutorials are not straightforward, unlike Tim's, and the docs can't give me a specific solution either. Yes, I know that after reading the docs I would be able to do it, but I have very little time (3 days), and in those days I also have to add 7 tools to the AI agent. Meanwhile I keep trying. ❤️ If you can help me with an article, blog, or email, please do 🙏❤️
Thx, Tim! Now llama3.1 is available under Ollama. It generates great results and has a large context window!
@@davidtindell950 But bro, my project is such that it can't depend on the LLM's context memory. Please tell me if you can help me with that!
@@siddhubhai2508 I have found that the FAISS vector store provides an effective, large-capacity "persistent memory" with CUDA GPU support.
...
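The "persistent memory" idea mentioned above can be shown without the FAISS dependency: persist embeddings to disk and search them on the next run. This is a brute-force sketch of what FAISS does at scale; the file path and toy 2-dimensional embeddings are made up for illustration.

```python
import os
import tempfile
import numpy as np

def save_memory(texts, embeddings, path):
    # Persist messages and their embeddings so they survive restarts.
    np.savez(path, texts=np.array(texts), embeddings=np.array(embeddings))

def load_memory(path):
    data = np.load(path)
    return list(data["texts"]), data["embeddings"]

def nearest(query, embeddings, k=3):
    # Brute-force L2 search; FAISS's IndexFlatL2 does the same job, much faster.
    dists = np.linalg.norm(embeddings - query, axis=1)
    return np.argsort(dists)[:k]

# Demo round-trip: write, reload, search.
path = os.path.join(tempfile.mkdtemp(), "memory.npz")
save_memory(["talked about cats", "talked about dogs"],
            [[1.0, 0.0], [0.0, 1.0]], path)
texts, emb = load_memory(path)
print(texts[nearest(np.array([0.9, 0.1]), emb, k=1)[0]])
```

Swapping the brute-force `nearest` for a FAISS index (with optional GPU support) is what makes this viable for large histories.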
This is very useful content, keep it up!
Hi Tim - now we can download Llama 3.1 too... By the way, can you also convert this to a UI using Streamlit?
Could you please tell us how to create a fine-tuned chatbot using our own dataset?
Thanks, super useful and simple!
I just wondered with the new Llama model coming out, how I could best use it - so perfect timing xD
Would have added that Llama is made by Meta, so despite being free it's comparable to the latest OpenAI models.
Tech With Tim is my favorite.
Can I ask who is in 2nd and 3rd?
@@WhyHighC 1: tim 2: tim 3: tim
Thank you so much!!
This is great! thanks
You may find it 'amusing' or 'interesting' that when I (nihilistically) prompted with "Hello Cruel World!", llama3.1:8b responded: "A nod to the Smiths' classic song, 'How Soon is Now?' (also known as 'Hello, Hello, How are You?')" !?!?!🤣
I love how you make it easy for us.
After that we need a UI, and bingo.
Btw, does it keep the answers in memory after we exit? Don't think so, right?
Based on the code, no; it only keeps the conversation in memory within a single run.
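One simple way to make the conversation survive between runs is to dump the context to a JSON file on exit and reload it on start. A minimal sketch; the file path and message format here are illustrative, not from the video.

```python
import json
import os
import tempfile

# Hypothetical location for the saved transcript.
HISTORY_FILE = os.path.join(tempfile.gettempdir(), "chat_history.json")

def load_history(path=HISTORY_FILE):
    # Reload previous exchanges if an earlier run saved any.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return []

def save_history(history, path=HISTORY_FILE):
    with open(path, "w") as f:
        json.dump(history, f, indent=2)

# In the chat loop you would append each exchange, then save before exiting:
history = load_history()
history.append({"user": "hello", "bot": "hi there"})
save_history(history)  # the next run's load_history() will see this exchange
```

For long transcripts this eventually overflows the context window, which is where the vector-store approach discussed earlier in the comments comes in.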
Cool!! Could I get this to summarize my e-library?
Hey there, is your VSCode theme public? It's really nice, would love to have it to customize
hello tim! this video is awesome, but the only problem i have is that the ollama chatbot is responding very slowly, do you have any idea on how to fix this?
This is swag, how can we create a custom personality for the llama3 model?
Hi Tim,
I recently completed your video on django-react project , but i need an urgent help from your side if you can make a video on how to deploy django-react project on vercel,render etc. or other known platform (this would really be helpful as there are many users on the django forum still confused on deployment of django-react project to some popular web deployment sites.
Kindly help into this.
Can you train the robot or give it a prompt? For example, if you want to create a chatbot for a business, can you give it prompts from the business so it can answer questions only based on the information from the given prompts?
How do you get Local LLM to show? I don’t have that in my VS Code
Thank You.
Great video, learned a lot. Can you advise me on the route I'd take to build a chatbot around a specific niche like comedy, and then build an app that I could sell or give away for free? I'd need to train the model on that specific niche, and that niche only, then host it on a server, I'd think. An outline of these steps would be much appreciated.
Adding a context, of course, generates interesting results: "context": "Hot and Humid Summer" --> chain invoke result = To be honest, I'm struggling to cope with this hot and humid summer. The heat and humidity have been really draining me lately. It feels like every time I step outside, I'm instantly soaked in sweat. I just wish it would cool down a bit! How about you? ...🥵
Great video! Is there any way to connect a personal database to this model, so that the chat can answer questions based on the information in the database? I have a database in Postgres and have already used RAG on it, but I have no idea how to connect the DB and the chat. Any ideas?
Hello Tim! When I run Ollama directly there is no delay in the response, but using the script with LangChain some delay appears. Why is that? How do I solve it?
Useful, keep it up.
Will this run on an android tablet?
thank you.
how can this be moved from locally to on an internal website
If you read my message, thank you for teaching. Would you mind teaching me more about fine-tuning? What should I do? (I want TensorFlow.) I also want it to be able to learn the things I can't answer myself. What should I do?
Can you show us how to do RAG with llama3?
Hello, do you know if it's possible to use this model as a "pre-trained" one and add some new, let's say, local information to it, to use it for a specific task?
Nice one
Can you teach us how to implement it in GUI form? I don't want to run the program every time I want help with this type of thing.
I implemented it, but it responds only after a few minutes. Why is it so slow?
Hey, Tim! Thanks for your tutorial. I have a problem: the bot isn't responding to me. Maybe someone else has the same problem? Give me some feedback, please.
I have not implemented it myself, but I have a doubt: you are using LangChain where the model is Llama 3.1, and LangChain manages everything here, so what's the use of Ollama?
LangChain simplifies interactions with LLMs; it doesn't provide the LLM. We use Ollama to download and run the LLM locally.
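That division of labor can be pictured in plain Python: the templating/chaining half is what LangChain handles, and a stand-in function marks where the call to the Ollama-hosted model would go. `fake_llm` is a stub for illustration, not a real API.

```python
# LangChain's job is plumbing: prompt templates, chaining, history handling.
# Ollama's job is to actually host and run the model.
def fake_llm(prompt: str) -> str:
    # A real setup would send `prompt` to a model served by Ollama
    # (e.g. llama3) and return its completion; this stub just echoes.
    return f"[model answer to: {prompt!r}]"

TEMPLATE = "Conversation so far:\n{context}\n\nQuestion: {question}\n\nAnswer:"

def chain(context: str, question: str) -> str:
    prompt = TEMPLATE.format(context=context, question=question)  # templating step
    return fake_llm(prompt)                                       # model-serving step

print(chain("", "why use Ollama at all?"))
```

Swap `fake_llm` for a real client call to Ollama and the structure is the same as in the video.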
My download stopped midway; why am I not getting it?
Where do you get all this stuff from
Why does microsoft publisher window keep popping up saying unlicensed product and will not allow it to run?
Can I have this code?
Is there any way to make a Python script automatically train a locally run model?
Hi, I have tried this and it's working, but the model is taking a long time to respond. Is there anything I can do to reduce that?
Is it possible to host this on a cloud server, so that I can access my custom bot whenever I want?
Do I need to install LangChain?
I don't know what is happening; when I run the Python file in cmd it shows me "hello world" and then the command ends.
Should I install Ollama in a virtual env?
Doesn't matter, it will always be stored in AppData/Local/ollama
It's not mandatory, but using a virtual environment is highly recommended. It helps manage dependencies more cleanly and avoids potential conflicts with other projects. However, if you prefer not to, you can install it globally, though it might cause issues later if you work on multiple projects.
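For anyone curious what the recommended venv step actually does: Python's standard library can create the same isolated environment programmatically. A quick sketch; the throwaway temp directory is arbitrary.

```python
import os
import tempfile
import venv

# Create an isolated environment: the same thing `python -m venv env` does.
target = os.path.join(tempfile.mkdtemp(), "env")
venv.create(target, with_pip=False)  # with_pip=True would also bootstrap pip

# Every venv carries a pyvenv.cfg marker; activation scripts sit in
# bin/ (Unix) or Scripts/ (Windows) next to it.
print(os.path.exists(os.path.join(target, "pyvenv.cfg")))
```

Packages installed inside that environment stay out of the global site-packages, which is exactly the dependency isolation the reply above recommends.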
what's the minimum hardware requirement? thank you!
8GB RAM
@@gunabaki7755 no discrete GPU needed?
Tim, this Ollama is running on my CPU and hence is really slow. Can I make it run on my GPU somehow?
what's your pc specs sir?
Sir, is it necessary to request access to the Llama models?
Actually I am confused about the permission terms; can you please help regarding that? 😊😊
For using Llama models locally, you generally don't need to request access, as the models are open-source and available for local deployment. However, you should always check the specific licensing and permission terms for the version you're using. Most open-source versions are free to use, but it's always good to review the terms to ensure compliance.
How much RAM is required to make this program run well? I only have 4GB of RAM.
Can I use a document as context, so that the chatbot answers user queries only from that document?
Yes, you can just load your PDF file and start asking questions from it. The Mistral 7B model will generate answers based solely on the content of the document, ensuring that responses are relevant to the information you’ve provided.
Amazing!
Can I make an app, upload it to the Play Store, and make money? Is that OK or not? 😢 Please reply
A dumb question... where is the template used?
Do I need VRAM for this?
Does the response speed of the AI bot depend on the GPU for models like Llama?
YES
Can I train this model? Can I give it information beforehand so it can answer me with it?
It was a great tutorial and I followed it properly, but I am still getting an error:
ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it
I am running this code on my office machine, which has restricted the OpenAI models and AI sites.
how can we stream output ??
I had been using the program Ollama on my laptop, and it was utilizing 101% of my CPU's processing power. This excessive usage threatened to overheat my device and decrease its performance. Therefore, I decided that I would discontinue using the program.
PS C:\Windows\system32> ollama pull llama3
Error: could not connect to ollama app, is it running?
What seems to be wrong? (sorry for the noob question)
You need to run the Ollama application first; it usually starts when you boot up your PC.
@@gunabaki7755 Will try this, thanks bro!
Hi. Is there a way to uninstall llama3 again?
but does it handle the nsfw conversation?
This context thing is not working; the bot does not know what was said earlier in the conversation.
If you need to work with large amounts of data, OpenAI's performance still can't be matched locally unless you spend a ridiculous amount on your computer build.
It can be matched by running the Llama 3.1 405B model!
thx ;)