GGUF models can be run directly with Ollama. How can models in other formats be converted for use with Ollama?
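One common route is converting a Hugging Face (safetensors) checkpoint to GGUF with llama.cpp's converter and then importing the result. A rough sketch; the script name and paths are assumptions from the llama.cpp repo, so verify them against your checkout:

```shell
# Sketch: convert a Hugging Face checkpoint to GGUF with llama.cpp's
# converter, then import the result into Ollama.

# git clone https://github.com/ggerganov/llama.cpp
# pip install -r llama.cpp/requirements.txt
# python llama.cpp/convert_hf_to_gguf.py ./my-hf-model --outfile model.gguf

# Point a Modelfile at the converted file and build the Ollama model:
printf 'FROM ./model.gguf\n' > Modelfile
# ollama create my-model -f Modelfile
```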
Thanks for sharing it, could you please create a video where we can do Q&A with local documents using Ollama models?
Kindly do it, it’s really needed.
Please check other videos in this channel, here u go ->
Chat With Documents Using ChainLit, LangChain, Ollama & Mistral 🧠
ua-cam.com/video/2IL0Sd3neWc/v-deo.html
Congratulations, excellent content! Thank you for sharing.
It would be interesting to see if you could use Open WebUI instead of "Chat UI using ChainLit, LangChain, Ollama and Gemma"
Excellent tutorial !!
Thanks !!
Hey mate, very good video with a clear explanation. Do you mind sharing what terminal app you are using? It seems very convenient with all that autocompletion and the hints.
I am using Warp, but you can watch this video for making your terminal better:
HOW To Make Your Mac Terminal Amazing
ua-cam.com/video/ycapVWVl98M/v-deo.html
Good explanation! Is there a list of model architectures that are supported by Ollama?
Ollama runs models in GGUF format, so it supports the architectures that llama.cpp supports; most popular models are available in its library. Give it a try !!
Phenomenal
How can I edit the Modelfile so that it includes a context, a personality, or a precise way of answering questions?
Is there any way we can use Exl2 on Ollama?
Thank you
You are welcome !!
Nice video,
Can you make a video on changing the system prompt and temperature, if it’s possible?
Thank you
It’s possible, just provide those parameters in the Modelfile.
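For example, a minimal Modelfile sketch with a system prompt and temperature (the base model name and values here are assumptions, adjust them to your setup):

```
FROM llama3
SYSTEM You are a friendly assistant that answers in short bullet points.
PARAMETER temperature 0.7
```

Then `ollama create mymodel -f Modelfile` builds a model that always starts from that system prompt.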
Could you please use llama index with hugging face
How do I change the Chainlit logo to my own?
I want to finetune Llama 3, but I need to create the special_tokens_map.json as follows:
{
"bos_token": {
"content": "",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
How can I do this?
Moreover, I want to run the model with Ollama so I can chat with it.
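If it helps, here is a small Python sketch that writes that special_tokens_map.json. The token strings are assumptions based on the Llama 3 tokenizer (the "content" fields above were empty), so double-check them against your model's tokenizer config:

```python
import json

def token_entry(content):
    # All three special tokens share the same flag settings in the file above.
    return {
        "content": content,
        "lstrip": False,
        "normalized": False,
        "rstrip": False,
        "single_word": False,
    }

# Token strings are assumptions; verify against your model's tokenizer.
special_tokens = {
    "bos_token": token_entry("<|begin_of_text|>"),
    "eos_token": token_entry("<|end_of_text|>"),
    "pad_token": token_entry("<|end_of_text|>"),
}

with open("special_tokens_map.json", "w") as f:
    json.dump(special_tokens, f, indent=2)
```

After fine-tuning, convert the model to GGUF and point a Modelfile at it (`FROM ./model.gguf`) so you can chat with it via `ollama run`.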
Good, how to create the Modelfile on Windows? 😁
It should work in the same way, give it a try !!
How to create the Modelfile on Windows?
Just open VS Code and create a file called Modelfile (not sure about capitalization) and insert the content into it. It doesn’t need any extension.
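The same idea works from Git Bash or WSL on Windows (in plain PowerShell, just create the file in any editor). A quick sketch; the model name and settings are assumptions, and the ollama commands are shown commented since they need Ollama installed:

```shell
# Create a Modelfile in the current folder with no extension.
cat > Modelfile <<'EOF'
FROM llama3
SYSTEM You are a concise, helpful assistant.
PARAMETER temperature 0.7
EOF

# Then build and chat with the model:
# ollama create mymodel -f Modelfile
# ollama run mymodel
```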
Hello sir, can we deploy this model?
Yes you can !!
@@datasciencebasics Sir, if possible, please can you create one tutorial for deploying this model?
Hello, deploying a model is use-case specific. It can be deployed locally, in different cloud services, etc. Please refer to other videos on my channel for help.
Far more hassle than it's worth. Just use any other app that uses normal GGUF files, like normal people.
Thanks for the comment. There are many ways to do it, and it’s just a preference which one to use 🙂
Excellent tutorial !!
Glad you liked it!