You need to run the Ollama server first (ollama serve), otherwise you'll get a connection refused error.
Yeah - I should have mentioned that at the start. Will do on videos going forward.
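A quick way to check whether the server is actually up before calling the library is to probe the port it listens on (a stdlib-only sketch; 11434 is Ollama's default port, adjust if you've changed it):

```python
import socket

def ollama_server_running(host="127.0.0.1", port=11434, timeout=1.0):
    """Return True if something is accepting connections on the Ollama port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused / timed out - the server is not reachable
        return False

if not ollama_server_running():
    print("Start the server first: run 'ollama serve' in a terminal")
```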
pin this message
Could you please tell me where I should run it? What should I type in the code?
@@federicogentile90 You need to run ollama serve from a terminal. The other commands in the video you can run from a Python shell/REPL, Jupyter Notebook, or a Python script.
Great work! I've been struggling to find resources on using Python with language models.
Does this ollama Python library utilize the GPU?
It does!
Hello, thank you for sharing! Is it possible to run it on Colab?
Following your tutorial to a T, I received ConnectionRefusedError: [Errno 111] Connection refused
That's something I need to figure out. Ollama has a binary that you need to install - do you know how to do that on Colab?
I have the same error!
@@mavert17 Sorry - I should have mentioned that you also need to install Ollama via the binary - ua-cam.com/video/NFgEgqua-fg/v-deo.html
On this Stack Overflow post they show how you can install Ollama when you're using Google Colab. I haven't tried it out yet but might be worth checking out - stackoverflow.com/questions/77697302/how-to-run-ollama-in-google-colab
@learndatawithmark Thanks for answering. No, I don't have any idea.
Very helpful, thank you. Is there any way to use my GPU to run the model using CUDA?
I think it should do that automatically - is that not what you're seeing?
It errors for me:
Exception has occurred: AttributeError
partially initialized module 'ollama' has no attribute 'pull' (most likely due to a circular import)
Maybe try uninstalling the Ollama library and then re-installing?
@@learndatawithmark Nah, I fixed it - I had named my Python script ollama.py, so it shadowed the library.
Silly mistake I made.
Does ollama python require ollama server to be running?
It does!
Beautiful!
Thx bro, I was struggling, then I just searched it up and it worked :D
Glad you got it working :D
ModuleNotFoundError: No module named 'ollama'
Did you do pip install ollama?
@@learndatawithmark Yes I did, still the same issue.
@@SomethingSpiritual Python cannot find the installed library. If you are running inside a venv, make sure the package is installed in that venv as well.
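A quick way to diagnose this is to check which interpreter is running and whether it's inside a venv - pip must install into the same environment the script runs in (a stdlib-only sketch):

```python
import sys

def in_virtualenv():
    """In an active venv, sys.prefix differs from sys.base_prefix."""
    return sys.prefix != sys.base_prefix

# 'pip install ollama' must target this exact interpreter, e.g. via:
#   <this path> -m pip install ollama
print(sys.executable)
print("inside venv:", in_virtualenv())
```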
So exciting.
Is there a way to add my own document and chat with it?
Yes, we could do that as well. I guess the easiest way would be to load the text into a message that gets passed into the array of messages given to the chat function. I showed how to do this with litellm (similar API to Ollama) over here - ua-cam.com/video/MiJQ_zlnBeo/v-deo.html
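A minimal sketch of that idea - build_messages is a hypothetical helper, and the dict format matches what the Ollama chat API expects; the final call is commented out since it needs a running server and a pulled model:

```python
def build_messages(doc_text, question):
    """Prepend the document as context ahead of the user's question."""
    return [
        {"role": "system",
         "content": "Answer using only this document:\n\n" + doc_text},
        {"role": "user", "content": question},
    ]

messages = build_messages("Ollama runs models locally.", "Where do the models run?")

# With the server running you could then do (not executed here):
# import ollama
# reply = ollama.chat(model="llama2", messages=messages)
# print(reply["message"]["content"])
```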