Llamafile: bringing AI to the masses with fast CPU inference: Stephen Hood and Justine Tunney
- Published Oct 13, 2024
- Mozilla's Llamafile open source project democratizes access to AI not only by making open models easier to use, but also by making them run fast on consumer CPUs. Lead developer Justine Tunney will share the insights, tricks, and hacks that she and the project community are using to deliver these performance breakthroughs, and project leader Stephen Hood will discuss Mozilla's approach to supporting open source AI.
Recorded live in San Francisco at the AI Engineer World's Fair. See the full schedule of talks at www.ai.enginee... & join us at the AI Engineer World's Fair in 2025! Get your tickets today at ai.engineer/2025
About Stephen
Open source AI at Mozilla. Formerly of del.icio.us, Yahoo Search. Co-founder of Storium (AI-assisted storytelling game) and Blockboard.
About Justine
Justine is a founder of Mozilla’s LLaMAfile project, a Google Brain alumna, and the owner of the Cosmopolitan C Library. She's focused on democratizing access to open source AI software while elevating its performance and quality.
What did I just watch ...mindblowing! Finally someone took the initiative of going against the tide while giving CPUs some attention that they have lost to the GPU madness!
This is so awesome! Just tried out llava 1.5 7b llamafile and it worked out of the box running on my CPU, without eating all of my RAM! The token generation speed was good enough for me! And my CPU is ~8 years old. Holy cow!
Where gguf?
where?
Llamafile now supports OpenAI API and non-AVX CPUs. Finally! Having the OpenAI API is a must.
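For anyone wanting to try the OpenAI-compatible endpoint: a minimal sketch in Python, assuming a llamafile is already running its local server on the default http://localhost:8080 (the placeholder API key and model name are illustrative; the local server doesn't enforce authentication):

```python
# Talk to a locally running llamafile through its OpenAI-compatible API.
# Assumes the server is listening on the default http://localhost:8080.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # llamafile's local endpoint
    api_key="sk-no-key-required",         # placeholder; no auth is enforced locally
)

resp = client.chat.completions.create(
    model="LLaMA_CPP",  # illustrative; the server uses whatever model it loaded
    messages=[{"role": "user", "content": "Summarize what llamafile does in one sentence."}],
)
print(resp.choices[0].message.content)
```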
These individuals are pioneers of the Personal AI. Efficient, Universal, and Economical.
loving the llamafile already. this is how i deploy local LLMs now!
Local for yourself or clients?
This was my favorite presentation!
Well, what am I supposed to say but: awesome... Running local AI on normal consumer hardware without any worries about privacy seemed impossible just months ago. All the computational work in GPT, Gemini, and others is done in the cloud on the companies' servers, so you don't know what they are doing with your data. Even if you have nothing to hide, I'm sure everyone has certain things they want to keep private... This seems to be the right way of implementing AI in a private manner. And making such a great effort without any commercial interests is nothing but mindblowing. Keep up the good work, please!
Now this is called achievement. Meanwhile the so-called "open"AI is looting people. You guys are awesome
Justine just shifted the timeline 💥🔀
This is fantastic! I can't wait to try it out.
Justine an absolute champion!
This is a game-changing breakthrough. Can't underplay this any other way.
These cloud companies are trying their best to keep their valuations high!!! This guy is the new CDO manager!!
Awesome, exactly what I have been looking for: no more heavy virtual environments, no more heavy NVIDIA CUDA drivers! Let's fricking go!!!
omg i love Justine Tunney! they are amazing!
it was the first time I had the chance to listen to one of his speeches. bro i like this guy. D:
I really like the idea of a Threadripper configuration but... does anyone have a reference machine configuration for that? I'd like to compare the price to existing alternatives like the dual RTX4090 setup that is mentioned!
He said, "Who remembers using the original Netscape Navigator?"... to that I say, who remembers using the original Mosaic browser? And then telnet before the graphical internet?
[raises hand] Doh!
“Who remembers the handshake tone in the dial-up process?” 😂
Who remembers the original smoke signals?
How about BTX?
We do! and Gopher!
This is utterly brilliant. What a fantastic presentation. Amazing project.
Well... I just took a look at the llama 3 70b llamafile repo and found this info about performance:
"AMD Threadripper Pro 7995WX ($10k) does a good job too at 5.9 tok/sec eval with Q4_0 (49 tok/sec prompt). With F16 weights the prompt eval goes 65 tok/sec."
70b would be the lower bound for a model I would enjoy using, but getting about 6 tokens per second of output on a $10k CPU... At that point I could just as well build a GPU machine...
So even though I think this is, in concept, an amazing project, either it or consumer hardware in general still has a long way to go before it is, in my opinion, usable for an average person such as myself.
(I'm assuming the performance data on the Hugging Face repo are at least somewhat accurate and not outdated.)
Awesome. Great project and presenters!
Absolutely fascinating and totally genius
This is absolutely amazing!
Refreshing indeed - tokens per second is one measure, and I like eval speed, but what exactly do you measure and how?
What about quality trade-off? Did they mention about that?
Is there a way to get Windows to run llamafiles bigger than 4 GB? Without being able to do that, you are very limited in the models you can run.
Amazing. Thank you.
Awesome work!
All of this was already possible before... Already back in early 2023. What they did was just save you 15 minutes (otherwise you'd have to download an inference program and weights separately)
Wow! Really great work!
this is just fantastic.
Justine is a GOAT
Freedom and justice are more expensive than money and power. No one lives and rules forever.
Respects and Salute to you guys...
Amazing❤
Love it!!
This is better than the Nvidia NIM solution (which is just containerization). Way better ..
Awesomesauce
Oh shit, CPU prices are going to hike
very cool
this is the future
👏🏻👏🏻👏🏻
*checking on RAM prices*
Why is no one talking about this?
The Singularity is here.
this guy is a fkn rockstar on stage I was totally blown away 🎉
Tired -> wired around @9:30 😂
Don't forget the browser.
I need an AI that can access the files on my hard drive. Does anyone have a suggestion? I don't want to upload them to the AI. I want the AI to access them directly.
ChatGPT4all has RAG
well this feels like something out of left field 🤷‍♂️
Seems too good to be true. What are the catches?
As an AI Design Engineer and developer of original works in Unified Language Models (a predecessor to LLMs) for over 20 years, I can say this compact framework, its independence from GPUs or custom hardware, and its resource-efficient methodology is the correct approach. 😊
”a” correct approach but maybe not “the” correct approach. It’s not clear what downsides there are yet.
@@fkxfkx I'm not sure how you're supposed to run it? GGUF I can run but what the heck is the 14GB "llamafile" thing?
@@bigglyguy8429 Actually, you don't need a 14GB llamafile. It can't even be run on Windows (4GB max executable size limit). You can keep a llamafile without embedding any model in it and call it with the -m parameter to specify the model file to load.
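For anyone wondering what that looks like in practice: a minimal sketch, assuming a small engine-only llamafile binary on your PATH and a separately downloaded GGUF file (both file names below are illustrative):

```python
# Launch a model-less llamafile with external GGUF weights via -m,
# which sidesteps Windows' 4GB executable size limit.
import subprocess

subprocess.run([
    "llamafile",                   # engine-only binary, no embedded weights
    "-m", "mistral-7b.Q4_0.gguf",  # illustrative name for a downloaded GGUF file
    "-p", "Hello, world",          # prompt for a one-shot CLI run
])
```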
NVIDIA has hired CIA agents to make sure this technology doesn't reach the hands of the public. Be safe, sir! 😝
Hats off for the engineering feat. But in terms of application, we are still just talking about text summarization. And the image generation in your own demo was just as disappointing as ever. There's no killer app for LLMs yet even though we keep throwing money and science at it. What are we even doing?
Am I the only one who thinks someone AI-generated Matt Perry?
Fuck yeah!!!
let's go to the gym
Free candy, I mean, free open source AI for everyone. It's like a trick. Don't fall for it. Cease AI.
Great work!