Jack Lin
Joined Sep 2, 2018
I'm studying Computer Science at University of Southern California.
Giuseppe Ottaviani plays Rank1 - Airwave at Avalon Hollywood
267 views
Videos
Metrolink passing San Clemente
57 views · a month ago
5:12 The conductor: Hi folks, great day for the beach. Right now it's December but we're living in... the wonderful Southern California here...
Union Station View after Dodger Parade
572 views · 2 months ago
2024 World Series Game 2 at Tom's Watch Bar
37K views · 3 months ago
A walk from O'Connell Street to Trinity College Dublin. Part 2
10 views · 7 months ago
A walk from O'Connell Street to Trinity College Dublin. Part 1
17 views · 7 months ago
Try llama.cpp with alpaca-lora-30B-ggml
4.8K views · a year ago
How to run sl (steam locomotive) on windows
212 views · 5 years ago
They liked it when the white guy liked it
From now on, whenever the Dodgers win, instead of "I love LA", they should play "It was a Good Day"
ICE CUBE for SUPERBOWL👏👍🎉
That’s why I tell my dad the Dodgers are the World Series champions, and he says maybe yes, 100% or 1000000% to win it all. Dodgers fans and the Dodgers team, you earned it, after the Yankees fans and Fat Joe were disrespectful to the Dodgers team and Dodgers fans. Hell yeah, we won it all and took it all.
We going to look back at these videos in 10-20 years if we still alive and reminisce the fuck out of these memories!!!! Today was a good day
Yes we will
True!!
I was there! Great game! Amazing atmosphere.
Lucky
Cube should get a WS ring with a mic 🎤 shaped of diamonds on it
If this scene does not signify unity, I don't know what does! Kudos to my fellow Angelenos! You are the best!
Guy at the bar doing too much
Considering Verdugo danced to that but not Fat Joe says it all
“We didn’t even need to play the game we had already won”
❤❤❤
That bar was fucking dead af
It’s a restaurant, not just a bar, what do you expect 🤷♂️
LET'S GOOOOOOOOOO DODGERS ❤🩵
Indeed today was a good day LA 🫡
Beautiful vibes 😊
The West Coast Is The Best Coast!!
Fucking badass record bro! Cheers man lets go dodgers!!! 🍻🍻🍻
Congratulations to the Dodgers on winning the championship
💪🏿💪🏿💪🏿💪🏿 yeslord
Fat joe who?
0:17
❤❤❤❤❤❤
LET'S GOOOOOOOOOO DODGERS ❤
🔥🔥🔥
Fire 💪⚾️🔥🔥🔥🔥🔥🔥🔥
Urgh, can you explain this please! My head is spinning now
Your computer be slow.
Did you slow down this video ?
Nope. It's the original speed.
is there a particular reason why they transferred the model to c++ (newbie question) other than to make the model smaller
C++ allows the entire model to be loaded into regular RAM. This is helpful for those of us without beefy GPUs.
There's a 7B model, which takes up only 4GB of memory. But I wasn't sure whether 7B would work at the time, because there was a breaking change in this project. So the authors not only run the models using C++ but also make smaller models.
Because otherwise you have to use a GPU, which uses different memory (VRAM) than system RAM. You can also get more RAM for less money than multiple GPUs. Most consumer GPUs have very little VRAM, on average 4-8GB, which usually isn't enough. That said, GPU inference is much, much faster than CPU, since you get parallel compute with higher floating-point precision for next-token predictions.
Everyone is saying it's because this way you can load the model into regular RAM, but if I'm not mistaken PyTorch already has this feature, so you don't need to reimplement everything in C++ if you only care about where the model is loaded. I think the real difference is that you need to reimplement things if you want custom protocols or formats (like the ggml format here) and control how they're managed at a low level for more efficiency. I guess that's the main reason.
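To put the memory discussion above in rough numbers, here's a back-of-envelope sketch. It only counts the weights (no KV cache or runtime overhead), and the 4-bit figure is an assumption matching common ggml quantization, so treat it as an estimate, not llama.cpp's exact accounting:

```python
# Rough weight-memory estimate for LLaMA-style models.
# Assumes weights dominate; ignores KV cache and runtime overhead.
def model_size_gb(n_params_billion: float, bits_per_weight: int) -> float:
    total_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9  # decimal GB

# 7B at 4-bit quantization -> ~3.5 GB of weights, consistent with the
# "~4 GB" figure mentioned above once overhead is added.
print(round(model_size_gb(7, 4), 1))   # 3.5
# The same 7B model in f16 needs ~14 GB, which is why quantized CPU
# inference in regular RAM is attractive on machines without big GPUs.
print(round(model_size_gb(7, 16), 1))  # 14.0
```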
How do you quit the chat? I asked the AI and it said Ctrl+T, but that doesn't work. I finally closed the prompt window, but I think there must be some way to quit?
Just press Ctrl+C two or three times (in case the prompt didn't catch the first one). That sends the interrupt signal (SIGINT) on Linux.
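A small sketch of why one Ctrl+C sometimes isn't enough: an interactive prompt can trap the first SIGINT (e.g. to cancel the current generation) instead of exiting. This is a hypothetical handler, not llama.cpp's actual code:

```python
import signal

def handle_once(signum, frame):
    # First Ctrl+C: restore the default handler, so the *next*
    # Ctrl+C terminates the process as usual.
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    print("Interrupted once; press Ctrl+C again to quit.")

# Install the trap; a program with a handler like this needs
# Ctrl+C pressed twice before it actually dies.
signal.signal(signal.SIGINT, handle_once)
```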
How did you transform the model? (.tmp ?) I get a "too old, regenerate your model files or convert.." error when trying to use it...
I followed the comment at github.com/ggerganov/llama.cpp/issues/382#issuecomment-1479091459 to transform the model.
But I notice there are some newer alpaca lora projects with more user-friendly setup like github.com/nomic-ai/gpt4all. Maybe you can try it.