How much GPU do we need to run this locally with ~1 second responses using Ollama? To run it locally: change llms.OpenAI to llms.Ollama, set up the embedding settings and LLM settings, and change the OpenAI assignment to ctx.data["llm"] = Ollama(...). It takes the locally stored model name as a parameter, and a temperature of 0.1 gives good output. Thanks for the video and have fun. Making an experiment, thanks!
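A minimal sketch of what that swap might look like, assuming the llama-index Ollama integration packages are installed and the models have been pulled locally (the model names here are placeholders, not from the video):

```python
# Rough sketch of the OpenAI -> Ollama swap (model names are assumptions).
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

# LLM settings: point at a model pulled locally, e.g. `ollama pull llama3`.
Settings.llm = Ollama(model="llama3", temperature=0.1, request_timeout=120.0)

# Embedding settings: a local embedding model, e.g. `ollama pull nomic-embed-text`.
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Inside the workflow, replace the OpenAI assignment with the local LLM:
# ctx.data["llm"] = Ollama(model="llama3", temperature=0.1)
```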
Yes, thanks for this video. Well done and instructive :)
This example is exactly what I needed to take my problem to the next level. Thank you!
Awesome 🎉 I’m working on this problem right now, helps a lot
These videos are gold. Such useful content, keep it up! Thanks
This is great! How can I print which chunks or documents were in the top 20 that the High Top K Strategy returned?
It's in response.metadata
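For example, something like this should list the retrieved chunks (a sketch, assuming a standard LlamaIndex Response object from a query engine with source_nodes populated):

```python
# Inspect which chunks the retriever returned.
response = query_engine.query("your question here")

# Per-node metadata, keyed by node id:
print(response.metadata)

# Or walk the retrieved nodes with their similarity scores:
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.metadata)
    print(node_with_score.node.get_content()[:200])  # first 200 chars of the chunk
```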
That's really cool