It works on desktop, but not on Android Chrome. Is there any way to get it working? I tried spinning up a server in both Flask and Node Express, in Termux and also the Pydroid IDE. Please advise.
Hello, great tutorial. A quick question: is it possible to do function calling with this LLM, which uses MediaPipe inference, by creating a custom LLM with LangChain or another framework?
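Not the author, but roughly what that could look like: MediaPipe's LLM Inference API doesn't expose native function calling, so you'd do it at the prompt level (ask the model for JSON and parse it yourself), and the LangChain part is just a thin custom-LLM wrapper around the on-device model. A minimal sketch, assuming @mediapipe/tasks-genai's generateResponse() and LangChain.js's custom LLM base class (double-check both APIs before relying on this):

```typescript
// Sketch only: wrap MediaPipe's on-device LLM as a LangChain.js custom LLM so
// LangChain's prompt/agent layers can sit on top of it. The MediaPipe method
// names and the CDN/model paths below are assumptions, not from the tutorial.
import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";
import { FilesetResolver, LlmInference } from "@mediapipe/tasks-genai";

class MediaPipeLLM extends LLM {
  private llm: LlmInference;

  constructor(llm: LlmInference, params?: BaseLLMParams) {
    super(params ?? {});
    this.llm = llm;
  }

  _llmType(): string {
    return "mediapipe-llm-inference";
  }

  // LangChain calls this with the fully formatted prompt, including any
  // function-calling instructions you put in your prompt template.
  async _call(prompt: string): Promise<string> {
    return this.llm.generateResponse(prompt);
  }
}

// Usage sketch: load the on-device model, then hand it to LangChain.
async function buildModel(): Promise<MediaPipeLLM> {
  const genai = await FilesetResolver.forGenAiTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm" // assumed CDN path
  );
  const llmInference = await LlmInference.createFromOptions(genai, {
    baseOptions: { modelAssetPath: "/models/gemma-2b-it-gpu-int4.bin" }, // your model file
  });
  return new MediaPipeLLM(llmInference);
}
```

From there you'd add your tool descriptions to the prompt and parse the model's JSON output to decide which function to call.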
Could anyone recommend an inference solution for fine-tuned models? Yes, I've heard of Groq, but I want to make use of the models I fine-tuned... I have access to Google Colab Pro, and I don't intend to use anything other than my fine-tuned models in production. The models I've fine-tuned are based on llama3-8b using the Unsloth library... I tried vLLM, but it crashed.
For people facing this error: "Failed to initialize the task..."
You have to enable WebGPU. Then your application will work.
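For those asking how to tell whether WebGPU is actually on: in Chrome you can look at chrome://gpu, or probe it straight from the page before creating the task. A small sketch using the plain browser API (no MediaPipe involved; the flag name is only needed on older Chrome builds):

```typescript
// Quick check that WebGPU is available before blaming the model files.
// navigator.gpu only exists in WebGPU-capable browsers (recent desktop Chrome/Edge);
// on older Chrome builds it sits behind chrome://flags/#enable-unsafe-webgpu.
async function checkWebGpu(): Promise<boolean> {
  const gpu = (navigator as any).gpu; // install @webgpu/types for proper typings
  if (!gpu) {
    console.error("WebGPU not supported in this browser/build.");
    return false;
  }
  const adapter = await gpu.requestAdapter();
  if (!adapter) {
    console.error("WebGPU is exposed but no GPU adapter was returned.");
    return false;
  }
  console.log("WebGPU is available.");
  return true;
}
```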
Same error here. I'm working on a Windows 10 machine and not sure what the problem is. I messaged Sonu on LinkedIn with a screenshot.
@toniramchandani I'd suggest using Chrome for this project. Your OS doesn't matter. Also check your browser console for errors.
@akj3344 I am using Chrome; I was just mentioning the system I'm on.
Hey, it's working now. I had a very silly little mistake.
There's a saying: don't commit small crimes. I did exactly that here 😎😄
What is WebGPU? How do I enable it?
Will this stress the client side? I mean, when I tested it in my browser, the browser hung for 2 to 4 seconds and then it was okay.
Hey buddy, does it run on just the CPU?
It keeps saying "Failed to initialize the task." I tried both the GPU and CPU models, and the same error keeps repeating.
Same error
None of these LLMs can produce an output form where product description update information gets written.
If you can do this, I will pay you.
Can we use it as a "talk with my PDF" tool? I have a PDF with 5000+ pages.
Better use case for Google Gemini
Try Ollama + AnythingLLM with Gemma or another small model, but I don't think a 5000+ page PDF can be done in one go because of the token limit...
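Not from the video, just to illustrate the token-limit point: instead of pushing 5000+ pages at the model in one prompt, you split the extracted text into chunks and query a small local model per chunk, then combine the answers (this is roughly what AnythingLLM's chunking + retrieval does for you). A rough sketch against Ollama's REST API; the chunk size, model tag, and prompt format are placeholders:

```typescript
// Sketch: chunked Q&A over a big document via Ollama's /api/generate endpoint,
// so no single request has to fit the whole PDF into the context window.
const OLLAMA_URL = "http://localhost:11434/api/generate"; // Ollama's default port

function splitIntoChunks(text: string, chunkSize = 4000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

async function askChunk(chunk: string, question: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma:2b", // any small local model you have pulled
      prompt: `Context:\n${chunk}\n\nQuestion: ${question}\nAnswer briefly:`,
      stream: false,
    }),
  });
  const data = await res.json();
  return data.response as string;
}

async function askPdfText(pdfText: string, question: string): Promise<string[]> {
  const answers: string[] = [];
  for (const chunk of splitIntoChunks(pdfText)) {
    answers.push(await askChunk(chunk, question));
  }
  return answers; // re-summarize these in a second pass if needed
}
```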
U r fast bro