You are a great Teacher Krish Naik ❤
Thank you, teacher. You are a motivation for all of us to keep up to date with newer technologies, no matter our age.
this is such an awesome video. thank you so much Krish bhai!
Became a new subscriber, Krish. I liked the video and the count rose to 500 ❤ You're awesome.
Amazing video... you are a fantabulous teacher...
Thank you Sirji. This video helped a lot as I was looking for this.
Your videos always make my day. Keep shining!
Bro, how do I use the URL?
Hi Krish, the content you provide is excellent for students and working professionals alike. However, it would be great if you could make a video on research paper topics for the latest Gen AI models.
I wish I could give this video more than one like. It's that good!
Nice one!
A question: how do we move this to production, or (after the proof of concept) hand it to a client as a project?
Thanks
Thank you for the wonderful video. Please also make a video on LLMOps (LangSmith).
This was a great video, thank you. Just curious: why should we use this when ChatGPT does the same thing? Also, can it be a RAG application, where you could fine-tune it on your company's database?
Krish, thanks for your time. I failed to understand the pragmatic use case for running this locally to generate code, since I use other AI tools available in the cloud with better GPUs. Aren't we just building wrappers on top of these models? I find distributed computing is still an interesting domain to work in, considering the challenges around app design, communication, domain modeling, building resilience, and scaling for the web.
Hi Sir, thanks for sharing this.
What is the basic application of it? Is it for any specific or custom use cases?
We can generate code on other LLM platforms easily.
Kindly create a video on how to use Ollama with Docker and AWS ECR, or with Docker alone. Kindly do it, as it will be useful for companies.
Thank you in advance
#Ollama in Docker
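Until such a video exists, here is a minimal sketch of the idea, based on Ollama's official Docker image (`ollama/ollama` on Docker Hub); the model name `codellama` is just an example and the volume/container names are arbitrary choices:

```shell
# Start the Ollama server in a container (CPU-only variant),
# persisting downloaded models in a named volume:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model inside the running container:
docker exec -it ollama ollama pull codellama

# The Ollama API is now reachable from the host at http://localhost:11434
```

Pushing that image to AWS ECR would then just be a matter of tagging and `docker push`, with the model volume handled separately.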
Thanks for the excellent video. Could you please also make a video on how we can deploy this model on a cloud platform?
Super 😍 tutorial, thank you so much Krish sir. I have a doubt: is it fine-tunable? Can anyone answer this?
You provide amazing information
Hi Krish, thanks for the videos. They are very helpful. Do you have a tutorial on multi-modal chat interaction with a PDF that contains pictures?
Thank you so much for sharing valuable content that keeps us in sync with the latest technology. Just a small favor, if you can:
I want to create a code-generation model for Flutter, React.js, or Next.js, whichever newer framework works. I am hoping you can teach us how to do this, along with the dataset preparation; that is one of the most difficult parts. Thanks in advance.
That was a great video... but could you please create a tutorial covering this application end to end, including deploying it on AWS so that anyone can access it? It would be very kind of you, or of anyone who replies.
This may sound basic, but why do we do all this when we can directly ask ChatGPT, Gemini, Mixtral, or Claude 3 and get the required code?
Clients don't want their source code to become public.
This runs entirely on your local machine, so there are no worries about data being compromised.
Sir, You are a great Teacher.
I'd like code showing how to deploy this on Hugging Face Spaces or another open-source deployment platform.
Since Ollama runs on the local machine, how does it run on a deployment platform?
Krish Naik, please answer my question:
Your CodeGuru gives a response within 5 seconds, while my CodeGuru responds after 3 to 4 minutes on the same prompt. Why?
I guess it might be because of GPU power.
You deserve more than a million subscribers ❤❤love from Pakistan
Hello Krish Naik. There was a problem with my code in the app.py file. Whenever I asked the prompt a question, I got this error: {"error":"model 'codeguru' not found, try pulling it first"}. The app.py file is copied exactly from the repository, so there is nothing wrong there. Anyway, thank you Krish for the tutorial.
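For anyone hitting the same error: it usually means the custom model was never registered with Ollama, not that app.py is broken. A hedged sketch of the likely fix, assuming the repo contains a Modelfile named `Modelfile` that defines `codeguru` (the file name may differ in your copy):

```shell
# Register the custom model with Ollama from the repo's Modelfile:
ollama create codeguru -f Modelfile

# Make sure the base model the Modelfile builds FROM is available locally, e.g.:
ollama pull codellama

# Verify that 'codeguru' now appears in the local model list:
ollama list
```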
Sir, how can we make the user interface look more effective and cool?
It’s really amazing
Hey Krish, how can we upload custom data and index it for Codellama?
I didn't get the URL/API part. How do I get the URL from GitHub? Please help.
Which tutorial do I need to watch before this one, as a prerequisite?
It seems LangChain is not required in the PoC that was shared. Please confirm.
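That observation looks right: Ollama exposes a plain REST API, so the core call can be made without LangChain at all. A minimal sketch, assuming a local Ollama server on its default port 11434 and a model named `codeguru` (both specific to your setup):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the generated text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running `ollama serve` with the model created/pulled):
# print(generate("codeguru", "Write a Python function to reverse a string."))
```

LangChain mainly adds convenience wrappers on top of this call; for a single-model PoC, the raw request is enough.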
Thank you, Sir.
Sir, could you please make a video on the table extraction task? The open-source solution is Table Transformer by Microsoft. How do we fine-tune it? And are there any other free options, please, sir?
How quick is the response? I have an i7 processor, and running the Llama model takes so much time even for simple queries.
Try using your GPU; it will be way faster.
Sir, does doing a BCA give us strong CS fundamentals?
Sir, one quick question. I can fire the same query and get an answer from ChatGPT too. So what is the need to build this application? It isn't even trained/fine-tuned on any custom data. Why would an org spend money to build it? Can you please tell me?
In this document-reading Llama project, the response time is around 12s. Can you reduce the response time? Can you teach us, or give any idea? 💡❤
Hello sir,
When will you make the video on building the Jarvis AI assistant?
Please release a step-by-step guide on how to fine-tune the Gemini API in Python. I tried by referring to the documentation but encountered a lot of errors with the OAuth setup!
thank you sir
Krish, it's very slow on my PC while running Ollama in cmd. What could be the reason?
Very good
Is the 3.8 GB codellama installation mandatory here?
Cool stuff!
What are the specs of your system?
Love from bomb-istan
Love from Lahore Pakistan
The url is not working.
It took me 6 minutes on my laptop to generate a response... I think I should buy a laptop with a good graphics card and 16+ GB RAM.
Sir, please upload a computer vision playlist.
@krishnaik