Thank you very much for the nice video. How can I use two or more CSV files with PandasAI and Ollama?
For example: comparing two or more CSV files and finding the join of them.
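If I read the PandasAI docs correctly, `Agent` accepts a list of DataFrames (something like `Agent([df1, df2], config=...)`), so you can load each CSV with pandas and pass them all in. Alternatively, you can join them yourself first and hand PandasAI a single frame. A minimal plain-pandas sketch of the join step (the file contents and column names here are made up for illustration):

```python
import io
import pandas as pd

# Two small CSVs standing in for the files you would normally
# load with pd.read_csv("file.csv").
countries_csv = "country,population\nFrance,68\nJapan,125\n"
gdp_csv = "country,gdp\nFrance,3.0\nJapan,4.2\n"

countries = pd.read_csv(io.StringIO(countries_csv))
gdp = pd.read_csv(io.StringIO(gdp_csv))

# Inner join on the shared key column.
joined = countries.merge(gdp, on="country", how="inner")
print(joined)
```

Changing `how` to `"left"`, `"right"`, or `"outer"` gives the other join types; the merged frame can then be passed to PandasAI like any other DataFrame.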
Great video! I have a setup on my local Streamlit, but I will deploy it on AWS to keep it private within my company. How should I handle the logs and the plotting of charts, since they are stored locally? I'd like to know if someone has already tried to deploy such an app. Thanks.
Hi @TirendazAI,
I have been following your channel closely, as very few people cover PandasAI with local LLMs, so thank you for that.
Question: how do you improve accuracy? I have been using Groq (Mixtral) and Mistral locally, and I always find the responses inconsistent, since the LLMs generate different code each time. For instance, it fails to produce the right answer to a simple question like 'What is the population of country xyz?' 3-4 times out of 10.
What's the secret to ensuring consistent responses? I have disabled caching in PandasAI to test how accurate the LLMs really are, and I am disappointed each day.
It would be sad if I had to write all the methods/functions myself (as skills for the agent) and merely have the LLM map to them correctly.
Is there a better fine-tuned open-source LLM for PandasAI?
Hi @ialbd, thanks for your feedback.
Unfortunately, PandasAI is a black box; it doesn't always give the same results. I also try several prompts to get the output I want.
We can run open-source models locally, but so far my experience is that Anthropic and OpenAI models give better results with PandasAI than open-source models do.
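One knob worth trying for the consistency problem: if the backend exposes a sampling temperature (Ollama and most OpenAI-compatible APIs do), setting it to 0 makes decoding greedy, so the same prompt tends to yield the same generated code. A toy sketch of why greedy decoding is repeatable while sampling is not (the `sample_token` helper is purely illustrative, not a PandasAI API):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick an index from `logits`. Temperature 0 means greedy (argmax)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature, then sample.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.5, 0.5]
# Greedy picks the argmax every time, no matter the random seed.
greedy = [sample_token(logits, 0, random.Random(seed)) for seed in range(5)]
print(greedy)
```

How you pass the temperature through to the model depends on the LLM wrapper you use, so check its documentation; and even at temperature 0 some backends are not perfectly deterministic, so this reduces variance rather than eliminating it.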
@@TirendazAI Hi again. Do you think PandasAI is really a production-ready library?
I was reading their documentation, and needing all the options like train/skills to make any LLM return consistent results simply defeats the purpose of using something like PandasAI.
If I have to write all the functions upfront, I can just use the function-calling features of OpenAI/Llama 3.
Anyway, PandasAI has potential; maybe with proper funding/backing it can be something in a year. But right now, making it production-ready and consistent takes the same amount of effort as a custom RAG-based application.
I am using the Mistral model via Ollama, but my application is running very slowly. Is it because of the CPU? Do you have a GPU in your system?
Yes, I have a GPU with 16 GB of VRAM. You can use a smaller model such as Phi-3.
Can you suggest an open-source 7B LLM to use instead of Mistral (like StarCoder, Phi, pandalytic, etc.)?
I recommend 7B models such as Llama 2 and Gemma. Depending on your project, you can select a 7B model; for example, you can leverage the open-source Llama 2 model for chat.
@@TirendazAI Thanks for the reply and your time.
Nice, bro. But I ran into a problem: when I press the generate button, I get this error:
APIConnectionError: Connection error
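An `APIConnectionError` like this usually means the app can't reach the LLM backend at all; if you're using Ollama, check that the server is running (`ollama serve`) and that your `api_base` points at the right host and port. A minimal reachability check, assuming Ollama's default port 11434 (the helper function is my own, not part of any library):

```python
import urllib.error
import urllib.request

def server_reachable(base_url, timeout=2):
    """Return True if an HTTP server answers at base_url, False otherwise."""
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server answered, just not with a 200 -- it is reachable.
        return True
    except (urllib.error.URLError, OSError):
        return False

# Ollama's default endpoint; adjust if you changed the port.
print(server_reachable("http://localhost:11434"))
```

If this prints `False`, the error is in the server setup, not in PandasAI itself.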
Can PandasAI be used with agents like in CrewAI? Can it also include RAG?
For RAG, check this: docs.pandas-ai.com/en/latest/train/#train-with-your-own-settings
Hi, great video! What can be done so that the answers are in more natural language while still analyzing the DataFrame, for someone who has no idea what pandas or DataFrames are?
You've got one more sub.
Thanks for your feedback. I think that to benefit from the power of generative AI, you need basic knowledge of data analysis and pandas.
I think before starting a video about local LLMs, you should make clear that this application needs a high-end computer with a GPU; otherwise, asking the model even a simple question takes hours. It works on your side because you have a powerful machine with more than 40 GB of RAM and a GPU card. Great video, but the app won't work for all viewers.
Thanks for your feedback. I'll take your comment into consideration.
Excellent !!!
Glad you like it!
Awesome ❤❤❤
Thanks 🙏
you genius!!!
Thanks 😀😀
Nice
Thanks 😀