GPT4ALL - The Free A.I. Chatbot For Windows, Mac and Linux
- Published 6 Jun 2024
- GPT4ALL is a free-to-use, locally running, privacy-aware chatbot. No GPU or internet required. In this video, I'm using it with Meta's Llama3 model and...it works quite nicely!
REFERENCED:
► gpt4all.io/index.html
WANT TO SUPPORT THE CHANNEL?
💰 Patreon: / distrotube
💳 Paypal: ua-cam.com/users/redirect?even...
🛍️ Amazon: amzn.to/2RotFFi
👕 Teespring: teespring.com/stores/distrotube
DT ON THE WEB:
🕸️ Website: distrotube.com/
📁 GitLab: gitlab.com/dwt1
🗨️ Mastodon: fosstodon.org/@distrotube
👫 Reddit: / distrotube
📽️ LBRY/Odysee: odysee.com/@DistroTube:2
FREE AND OPEN SOURCE SOFTWARE THAT I USE:
🌐 Brave Browser - brave.com/dis872
📽️ Open Broadcaster Software: obsproject.com/
🎬 Kdenlive: kdenlive.org
🎨 GIMP: www.gimp.org/
💻 VirtualBox: www.virtualbox.org/
🗒️ Doom Emacs: github.com/hlissner/doom-emacs
Your support is very much appreciated. Thanks, guys! - Science & Technology
I like ollama + self-hosted LobeChat (a UI for chatting with bots that can integrate with ollama), since ollama can leverage the GPU, which leads to way faster token generation
GPT4ALL can use the GPU as well through the Nomic Vulkan backend, and since it uses Vulkan for inference it can run on any Vulkan-supported GPU with enough VRAM
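To make the "enough VRAM" point concrete, here is a minimal sketch of that device choice as logic. The helper, model names, and VRAM footprints below are hypothetical illustrations (rough Q4-quantization ballparks), not GPT4ALL's actual selection code:

```python
# Hypothetical helper illustrating the comment's point: prefer the GPU when
# Vulkan is available and the quantized model fits in free VRAM, else fall
# back to CPU. The footprint numbers are assumptions, not measurements.
MODEL_VRAM_GIB = {
    "llama3-8b-q4": 5.0,
    "mistral-7b-q4": 4.5,
}

def pick_device(model: str, free_vram_gib: float, vulkan_ok: bool = True) -> str:
    needed = MODEL_VRAM_GIB[model]
    if vulkan_ok and free_vram_gib >= needed:
        return "gpu"  # GPT4ALL would route this through its Vulkan backend
    return "cpu"

print(pick_device("llama3-8b-q4", free_vram_gib=8.0))  # gpu
print(pick_device("llama3-8b-q4", free_vram_gib=4.0))  # cpu
```

The upside of a Vulkan backend over CUDA-only tooling is exactly this: the same check works for AMD, Intel, and NVIDIA cards alike.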
Oooh, this is so nice of you, man.
I'll give it a try. I just can't get LM Studio to work on my Linux PC. Thank you for the walkthrough. Love your channel, bro!
People outright rejecting local AI sounds a lot like: old man yelling at cloud 😂
This is really neat. I downloaded the Nous Hermes 2 Mistral DPO model. It's remarkably slow on my little quad core machine but still fun.
"No GPU required" well this is why you need it 😅
Ah yes, yet another toy. I definitely agree regarding the alternative dark theme. The one you used in the video looked washed out and faded, but the other one was clean and clear, very crisp.
Thanks, man!
This looks fantastic. Thanks a lot for putting a spotlight on this project.
Very interesting this GPT4ALL is
damn that haiku was actually really good
Ollama, LM Studio, Oterm for a CLI client, and OpenUI if you want a fancy web one.
I like the AppImage, when it works. When it does, you can add what you want, but nothing is mandatory, and it doesn't require a sign-in... although that's there if you're into that kind of thing. The AppImage runs just fine even on a computer I have from 2004: dual core, 3GB of RAM max. This type of thing runs great on the Pi, but the AppImage is amd64 only.
That is what I want. Not something that screenshots me every 5 seconds.
Yeah, as some others have mentioned, ollama is what I discovered. It's a neat gimmick at the moment, though. I did use it to make a neat backup file script! 😊
Hey DT: Thank you for this great video. AI is looking like the next big thing. I am seeing actual job descriptions for training AI bots. Thanks for this information!
*reminds me of LUNA/Coin gaming hype and ChrisTitusTech saying 'chat-Gptitty'*
Would also love to see KoboldCpp on your channel, DT, which is also open source and Linux native.
Still not using it. I don't trust Skynet.
Kobolds are not to be trusted
Looks great, can't wait to teach it to replace me.
I installed it on my MX KDE and gave it a couple of test runs DT, it works pretty well. I look forward to exploring it a lot more. Great video my friend!
Thanks for the suggestion DT, as if I don't spend enough time with these things already LOL. Looking forward to having a local one that isn't sharing data like all the others & of course that is open source. Appreciate it. Already installed from the AUR while this video was playing!
Can confirm that the installer works on Fedora!
I do prefer the UX of Jan AI or Open WebUI, but since Linux users love the CLI, you should probably cover Ollama directly, which does the same thing but in the terminal.
I did this with Google Gemini: Write me a haiku using the words "A.I."
Gemini says:
Silicon dreams bloom,
A.I. whispers to the wind,
A future unknown.
And a second time:
Code breathes, then takes flight,
A.I. seeks its own true north,
A world unseen yet.
Thanks DT for letting us know about gpt4all.
Big brother for all!
Ollama performs much better. That too, in the terminal.
Has anyone tried out the API for gpt4all? I have yet to use it.
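For anyone curious, GPT4ALL can expose a local HTTP server that mimics the OpenAI chat-completions API (it has to be enabled in settings; the port and model name below are assumptions for your setup, not guaranteed defaults). A hedged sketch of building a request against it with only the standard library:

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       model: str = "Llama 3 Instruct",
                       base_url: str = "http://localhost:4891/v1") -> urllib.request.Request:
    """Build a POST request for a local OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With the local server actually running, you would send it like this:
# with urllib.request.urlopen(build_chat_request("Write a haiku about A.I.")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the wire format matches OpenAI's, any client library that lets you override the base URL should also work pointed at the local server.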
Liking and commenting!
dtoptions leaking with that disclaimer 🥵
Don't do it! OpenAI is becoming Skynet in real life. We were warned about this in Terminator!!!!
Hi DT, I am not able to configure gpt4all on my ArcoLinux. Can you help, please?
Ollama + Open Web UI would be a better option
It missed out aardwolf!
We are now in the third AI Hype period.
We had the first AI Hype in the late 1950s and during the 1960s, and then the AI Winter of the 1970s.
The second AI Hype was in the 1980s, with the second AI Winter between roughly (depending on which events one chooses to mark the beginning and end) 1990 and 2010.
I think you forgot and left the financial disclaimer ON from the other channel lol
The key question: is it uncensored?
Nope. But Ollama is
If you use an uncensored model, yes. All these AI backends accept other models; just load models that work for your needs. Some are better for coding, some are better for creative writing, some are made for function calling, some have been more extensively uncensored, etc.
How can I host it?
😲🤯🎉 Excellent, 100000 thanks
It's a wrapper for ChatGPT?
No... ChatGPT is just one model... this can be used with other models as well
I only have a Ryzen 5000 and 16 GB of RAM; I think my laptop would catch fire if I attempted this
Sadly, compared to more commercial offerings, the bots are seemingly pretty limited. I mean, they don't even remember a conversation wholly; just a few sentences back and forth and they have already forgotten crucial parts of it. You can increase the memory to no avail; these bots just don't remember much short term. They've got the worst case of Alzheimer's long term, too, since they also don't store any information in long-term memory, even though locally that should be possible somehow. It's like they're frozen in time and you always wake up the same clone. When can we finally tell them something and have them actually remember it a week, and thousands of other conversations, later?
While DT was rolling through that settings menu, I saw a checkbox for 'save chat context to disk'; presumably that would have some effect on the bot's memory.
Things like KV cache quantization and --flashattention (using examples from KoboldCpp, since that's what I'm mostly used to) can allow you to run 64K tokens of context locally on midrange hardware, and that's a lot of tokens, lol.
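To see why quantizing the KV cache matters at 64K context, here's a back-of-the-envelope sizing. The dimensions are Mistral-7B-style assumptions (32 layers, 8 KV heads via grouped-query attention, head_dim 128), not measured numbers from any particular backend:

```python
def kv_cache_bytes(context_len: int,
                   n_layers: int = 32,
                   n_kv_heads: int = 8,
                   head_dim: int = 128,
                   bytes_per_val: int = 2) -> int:
    # 2x for the separate K and V tensors; one value per layer, per token,
    # per KV head, per head dimension. bytes_per_val=2 models an f16 cache.
    return 2 * n_layers * context_len * n_kv_heads * head_dim * bytes_per_val

GIB = 1024 ** 3
print(kv_cache_bytes(65536) / GIB)                   # f16 cache at 64K tokens: 8.0 GiB
print(kv_cache_bytes(65536, bytes_per_val=1) / GIB)  # 8-bit cache: 4.0 GiB
```

Halving the per-value width is roughly what an 8-bit KV cache buys over f16, which can be the difference between a 64K context fitting on a midrange card or not.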
Mistral 7B uncensored is my goto selfhost
AI is still in development at this time, but I feel it will get much better as soon as 1 year from now.
@@gildedlink No, that just means you can re-read it. The AI will consider what's written in it, to a point, but it's like 1000 miles diameter, one inch deep. I can tell an AI "I name you Stephen", close the conversation, start a new one, and ask it what its name is, and it'll either come up with a random one or tell me that as a language-driven model, I DON'T NEED NO NAME, DUH. ;3
Just like it doesn't even know the system date and time. Why aren't these AI at the very least given the current date and time?
Maybe there's a minority on the channel that uses macOS, but if anyone is running macOS, I don't think this is something that should be recommended to install, at least version 2.8.0. I'm experiencing bug #2400, and it's a bit unsettling to see how the app does nothing, can't quit normally, and the window title doesn't even match the app name (it reads 'chat' instead of gpt4all). I'm still not sure if I just installed malware, and I'm sure I tested the dmg available on the official site AND the GitHub repo.
Ollama with llama3, phi3 or gemma is interesting for testing on my laptop 🤣
Is it better than LM Studio?
LM Studio being closed source isn't very appealing, a good alternative to it is Jan AI.
I try to avoid AI whenever I can; definitely not on my Linux desktop.
You sure don't want to follow its directions for cooking macaroni. It tells you to boil the macaroni in gasoline. What a wonderful thing A.I. is. It's downright dangerous.
No it's not. And that's just one recent weird thing that Google did.
@@Singularity606 Tell me about it. Google does too many weird things
You did not use the GPU to accelerate the responses
Which is great because not everyone has a GPU or is willing to buy one just to fiddle with an AI chatbot.
@@luciengrondin5802 Overpriced and proprietary NVIDIA products are banned from my house - as well as the crappy "games as a service" that rely on them.
The GPU inference speed increase is a 10x over CPU only. Really matters a lot.
@@luciengrondin5802 then be happy with a slower chatbot
Kaspersky warned me before clicking this video. Why?
The UI is janky at best
ollama + alpaca = much better than anything!
what does alpaca do?
AI is overhyped; it should be good in theory, but it sucks right now
Pog
It's... let's call it lacking when it comes to Swedish ;-) Fewer people, fewer results.
Yeah ChatGPT is more fun than laser pointers with a room full of cats :)
running on cuda on my gtx1080 gave me a speed of 45
Sorry, AI is not getting on my Linux. Bite me, OpenAI. Thanks!!!!
This isn't by OpenAI.
I use macOS; that's what Linus Torvalds uses.
He is running Debian on a MacBook, fool.
Asahi Linux on MacBook Air - current flagship distro is Fedora Asahi Remix, which is a collaboration between Asahi Linux and the Fedora Project
If AI bots are going to be the centre of desktop/laptop computers, then Windows and Microsoft are going to be super dominant; even Apple is not going to reach that level
Apple will announce their copilot counterpart on Monday
@@mentalmarvin let's see
@@mentalmarvin Apple will probably do some privacy-level thing, which I don't know how well they will implement, but Microsoft is very open that they don't give a crap about privacy; they will introduce anything that can collect data and improve the service
POE AI
all my scripts have AI in them and I'm almost not ashamed of it
I write my scripts manually - because that's how you learn stuff and keep your brain active. You might as well just say "I am not ashamed of being lazy".
@@terrydaktyllus1320 i am not ashamed of being lazy, problem? you may as well go to the library every time you have to look up a word in the dictionary
@@MacroAcc I am simply making an observation, it's still your problem to deal with. I don't remember the last time I needed to look up a word in the dictionary. I have an extremely large vocabulary based on being intelligent, having had a good education and always having been encouraged to think for myself - and that's because I like learning stuff and thinking for myself.
There you go, we've gone "full circle".
@@terrydaktyllus1320 you good mate
Ollama better
Is it woke like all the other AIs though?
You can use any model you want. Use a based uncensored model like me and ask *anything* you want.
@@Lewdiculous Any recommendations for a based LLM?
If that new world is anything like what I have experienced with AI so far, we will all starve. AI is a shit show.
AI is a privacy and security nightmare, and people who keep pushing it are contributing to the problem. You can't say you're for privacy with videos like this. Gimme a break 🤦♂
How is this hurting your privacy? It's an interface for a local set of weights.
Nah, no thanks. Appreciate your bringing this to my attention but I don't trust any of this chatbot BS and don't want it on my Linux system.
It's 100% local and you choose everything...
The models are open source on Hugging Face, so there are no privacy issues here. Of course this just adds bloat to your PC imo, especially as I still think most open-source models hallucinate things / have limited knowledge.
@@opposite342 Fair enough. If I was going to install something like this, I'd prefer open source for sure.
Why do you hate Brazilian folk? You hate biglinux, lxlinux, whubuntu, Bolsonaro, NeymarJr
whoever doesn't hate Bolsonaro and Neymar isn't good people :)