Llamafile on Android Tutorial
- Published 30 Jun 2024
- llamafile github - github.com/Mozilla-Ocho/llama...
(releases on the right side)
tinyllama - huggingface.co/TheBloke/TinyL...
➤ Twitter - / techfrenaj
➤ Twitch - / techfren
➤ Discord - / discord
➤ TikTok - / techfren
➤ Instagram - / techfren - Science & Technology
Just saw your PR. Greatly appreciated your contribution to the project so far, especially with the cosmo backend.
Debugging with Justine!! Well done dude, so happy you got this working
@@MikeBirdTech 🥳🥳🥳
Great video, thanks a lot!
@@wardehaj you're welcome! Thank you for commenting!
Using Ollama, what's the benefit of llamafile? (Not including Android use)
Is there a recommended hybrid approach?
Thanks for the bite size clip! ❤
It's a small file that is portable and usable across different operating systems. It's also known to be faster on some architectures, and it comes with its own WebUI and can host an inference server.
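A minimal sketch of that workflow, assuming you've downloaded a llamafile release (the filename here is illustrative, and the endpoint path assumes llamafile's llama.cpp-style OpenAI-compatible server on its default port 8080):

```shell
# Make the downloaded llamafile executable and launch it.
# By default it starts a local WebUI / inference server on port 8080.
chmod +x tinyllama.llamafile
./tinyllama.llamafile

# In another terminal, query the bundled server
# (OpenAI-compatible chat endpoint, as exposed by llama.cpp's server):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```

The same single file runs unchanged on Linux, macOS, Windows, and (as in the video) Android via a terminal emulator, which is the portability point being made above.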
@@techfren love how it just runs the local server, that's awesome. Can this be bundled and run in the background with a "front-end" native Android app of our making?
Makes me think about making sure that AI pipelines are built to be cross-platform and instantly transferable.
Regarding llamafile, the only thing it doesn't have is Ollama's amazing pull feature. I wish someone made a thin wrapper on top of ollama and llamafile to get the best of both worlds.
@@fire17102 yeah, I think bundling llamafile would be much easier than other methods. What's ollama's pull method?
@@techfren I meant that you can just do `ollama run` and it will pull and download the model for you, easy as pie. Llamafile needs that..
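The "thin wrapper" idea above could be sketched as a shell function that mimics `ollama run`'s download-then-execute convenience. This is purely hypothetical: the registry URL is a placeholder, since llamafile has no official pull registry, which is exactly the gap being discussed.

```shell
# Hypothetical pull-and-run wrapper for llamafiles, in the spirit of `ollama run`.
# The download URL is a placeholder -- a real wrapper would need an actual
# model registry (the thing Ollama provides and llamafile currently lacks).
llamafile_run() {
  name="$1"
  file="${name}.llamafile"
  if [ ! -x "$file" ]; then
    echo "pulling ${file}..."
    curl -L -o "$file" "https://example.com/llamafiles/${file}"  # placeholder URL
    chmod +x "$file"
  fi
  "./$file"
}

# Usage: llamafile_run tinyllama
```

With something like this, `llamafile_run tinyllama` would behave like `ollama run tinyllama`: fetch once, then launch the local server on every subsequent call.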