Great overview, thanks!
Thank you, yet again! I used Ollama's 'llama3.1:8b' and it answered several queries quite well!
Thank you too for sharing the feedback!
I confirmed that the query did indeed use my local NVIDIA GPU, so it was fairly quick, though not very fast!
Thanks for the feedback. I am creating a new video showing how you can access the Llama 3.1 models using GroqCloud, where the LLM responds in about a second. Getting the API key is totally free.
There's a step that i may have missed. Do you have the LLama 3 model installed on your local machine?
If you have Ollama installed, you can simply type 'ollama run llama3.1' in the terminal and it will automatically start downloading Llama 3.1 to your system. It can take up to 4 GB of space.
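If you'd rather download the model first and chat later, Ollama splits those into separate commands. A quick sketch, assuming a standard Ollama install:

```shell
# Download the model weights without starting a chat session
ollama pull llama3.1

# List locally installed models to confirm the download
ollama list

# Start an interactive session (downloads first if not already pulled)
ollama run llama3.1
```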
I'm getting this error: The `__modify_schema__` method is not supported in Pydantic v2. Use `__get_pydantic_json_schema__` instead in class `SecretStr`. How do I solve it?
Are you still facing the error?
I'm having compatibility issues. Could you share your Python version or your libraries' versions?
I am using Python 3.11.7
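For anyone else hitting the `__modify_schema__` / `SecretStr` error above: it typically appears when a library written against Pydantic v1 is imported under Pydantic v2. Upgrading the library stack usually clears it; pinning Pydantic back to v1 is the fallback. A sketch, assuming pip and that llama-index is the library in use (adjust the package name to whatever raises the error in your traceback):

```shell
# Option 1: upgrade the library so it supports Pydantic v2
pip install -U llama-index

# Option 2 (fallback): pin Pydantic to the v1 line the library expects
pip install "pydantic<2"
```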
So when we execute `llm = Ollama(model="llama3.1", request_timeout=420.0)`, does this mean we need to deploy Ollama on our local PC and pull llama3.1?
Yes
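Yes, the `Ollama(...)` class is just a client for a local Ollama server, so the server must be running and the model pulled first. A minimal sketch of the same round trip using only the standard library, assuming Ollama's default port 11434:

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model, prompt):
    """Build the JSON body for a non-streaming generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="llama3.1"):
    """Send one prompt to the local Ollama server and return its text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # Requires `ollama run llama3.1` (or `ollama pull llama3.1`) to have been done first
    print(ask_ollama("Why is the sky blue?"))
```

If the server isn't running or the model isn't pulled, this is exactly where you'd see a "connection refused" or model-not-found error.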
Is there any chance to increase the response speed without a GPU?
No, a GPU is a must, or at least a machine with a good configuration.
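One small thing that can help on CPU-only machines is making sure you're running the smallest Llama 3.1 variant rather than a larger one. A sketch, using the 8B tag another commenter above mentioned:

```shell
# Explicitly run the smallest Llama 3.1 variant (8B parameters);
# larger variants will be dramatically slower without a GPU
ollama run llama3.1:8b
```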
Does it work with PDF images of charts and tables?
Yes, it works with all types of data.
I downloaded the model using Ollama on my internet-connected system, but how do I move the model files to an intranet environment? Please help.
This should not be a problem. You can copy the model files over and use them there.
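One way to do the transfer is to copy Ollama's model directory across. A sketch, assuming a default user-level install where models live under `~/.ollama/models` (the path differs on Windows and for service installs, so check your setup):

```shell
# On the internet-connected machine: archive the model directory
tar -czf ollama-models.tar.gz -C ~/.ollama models

# Move ollama-models.tar.gz to the intranet machine (USB drive, internal file share, etc.)

# On the intranet machine: unpack into the same location, then verify
tar -xzf ollama-models.tar.gz -C ~/.ollama
ollama list
```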
Hi,
I am not getting a response.
I'm getting a "connection refused" error.
Please help me solve this.
This can happen repeatedly when the model is too large for your machine. Please keep trying, or use a free GroqCloud API key to run Llama 3.1 at no cost. I made a video on it recently.
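For anyone who wants the GroqCloud route without extra libraries, here is a minimal sketch against Groq's OpenAI-compatible endpoint. The model name `llama-3.1-8b-instant` and the `GROQ_API_KEY` environment variable are assumptions; check the Groq console for the current model names and set the key yourself:

```python
import json
import os
import urllib.request

# Groq's OpenAI-compatible chat completions endpoint
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_payload(model, prompt):
    """Build the JSON body for a single-turn chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_groq(prompt, model="llama-3.1-8b-instant"):
    """Send one prompt to Groq and return the assistant's reply text.

    Model name is an assumption -- verify current names in the Groq console.
    Requires the GROQ_API_KEY environment variable to be set.
    """
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_groq("Say hello in one sentence."))
```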
Getting this error when querying:
ValueError: Expected where to have exactly one operator, got {} in query.
Unfortunately, I also received this error message!
The same error
I am not sure if something changed. It worked for others earlier.
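For anyone still stuck on this: the message comes from the Chroma vector store being handed an empty `where` filter (`{}`). Passing no filter at all, or a filter with exactly one key/operator, avoids it. A pure-Python sketch of the rule; the `check_where` helper here is hypothetical, just to illustrate the shapes that do and don't trigger the error:

```python
def check_where(where):
    """Hypothetical helper mimicking the rule behind the error:
    a where filter, if supplied, must contain exactly one key/operator."""
    if where is None:
        return "ok: no filter applied"
    if len(where) != 1:
        # This mirrors the ValueError seen in the thread above
        raise ValueError(f"Expected where to have exactly one operator, got {where}")
    return "ok: one operator"

# check_where({"author": "alice"}) passes; check_where({}) raises the error above.
```

So the practical fix is to find where your code builds the metadata filter and either drop it entirely or give it a real condition.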