u da best man! Followed and worked! with some modifications ofc
Awesome 🚀😊🎉
Hi, for sure a great video from you. How do you find the responses of GPT after integrating it with the Pinecone database? Did it work better with the retrieval network? And do you know the exact limit of the knowledge base?
Hi
Using RAG with ChatGPT makes the responses more relevant to your knowledge base.
It is the same as using OpenAI APIs with Pinecone:-
ua-cam.com/video/r_W0cnwaLQo/v-deo.html
ua-cam.com/video/rTTRKsV1vP0/v-deo.html
The built-in knowledge storage of ChatGPT has a limit of 10 MB (or something similar). But by using Pinecone you can extend it almost without limit (Pinecone now has a pay-as-you-go serverless model).
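To make that concrete, the RAG flow is: embed the user's question, query Pinecone for the top-k most similar chunks, then stuff those chunks into the prompt sent to ChatGPT. A minimal sketch of the prompt-assembly step (the function name and prompt wording here are my own, not from any SDK; the embedding and Pinecone query calls are omitted):

```cpp
#include <string>
#include <vector>

// Combine the user's question with text chunks retrieved from Pinecone
// into a single prompt that keeps the model grounded in your data.
std::string buildRagPrompt(const std::string& question,
                           const std::vector<std::string>& chunks) {
    std::string prompt = "Answer using only the context below.\n\nContext:\n";
    for (const std::string& chunk : chunks) {
        prompt += "- " + chunk + "\n";
    }
    prompt += "\nQuestion: " + question;
    return prompt;
}
```

The actual retrieval (embedding the question and calling the Pinecone query endpoint) happens over the OpenAI and Pinecone APIs; only the assembled prompt is sent to the chat model.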
Hi, I'm receiving an error on the "IOT Based Water Level Controller Using NodeMCU ESP8266" project. Can you please help me with this? Compilation error: cannot convert 'const char [57]' to 'FirebaseConfig*' {aka 'firebase_cfg_t*'}
Hi
There have been some changes in that library.
Check out my last reply in this thread for the solution:- gist.github.com/TrickSumo/2f16c122b2f59c1e7b0846514dd945cd
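For anyone hitting the same error: it usually means the sketch is calling Firebase.begin() with the old two-string signature (host and auth token), while newer versions of the Firebase ESP8266 client library expect pointers to FirebaseConfig and FirebaseAuth objects. A hedged sketch of the newer call style, assuming the Mobizt Firebase-ESP8266 client (the URL and secret below are placeholders; check the linked gist for the exact fix):

```cpp
#include <FirebaseESP8266.h>  // Mobizt Firebase ESP8266 client

FirebaseData fbdo;
FirebaseConfig config;  // replaces the old host string argument
FirebaseAuth auth;

void setup() {
  // Old style (now fails to compile with the error above):
  // Firebase.begin("your-project.firebaseio.com", "DATABASE_SECRET");

  // New style: fill the config/auth objects and pass pointers.
  config.database_url = "https://your-project.firebaseio.com/";  // placeholder
  config.signer.tokens.legacy_token = "DATABASE_SECRET";         // placeholder
  Firebase.begin(&config, &auth);
}

void loop() {}
```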
I have a Pinecone index already set up that I would like to connect, but I am receiving an 'Internal Server Error' in Postman when I try to run it. The Pinecone DB has the fields 'id', 'text', and 'source'. Do you know why I might be getting this error?
Hi
Are you trying to connect to the Pinecone API using Postman?
Or is Render giving the 500 Internal Server Error?
How do I keep it running permanently?
Hi
Do you mean how to keep the Render server running always?
It looks like on the free tier they shut the server down after some time of inactivity. Then when you try to access it, the first request has some delay while the server boots up.
One option is to get a paid plan from Render, or use a cloud service like AWS EC2.