Run your own AI (but private)
- Published May 4, 2024
- Run your own AI with VMware: ntck.co/vmware
Unlock the power of Private AI on your own device with NetworkChuck! Discover how to easily set up your own AI model, similar to ChatGPT, but entirely offline and private, right on your computer. Learn how this technology can revolutionize your job, enhance privacy, and even help you survive a zombie apocalypse. Plus, dive into the world of fine-tuning AI with VMware and Nvidia, making it possible to tailor AI to your specific needs. Whether you're a tech enthusiast or a professional looking to leverage AI in your work, this video is packed with insights and practical steps to harness the future of technology.
🧪🧪Take the quiz and win some ☕☕!: ntck.co/437quiz
🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy
VIDEO STUFF
---------------------------------------------------
Ollama: ollama.com/
PrivateGPT: docs.privategpt.dev/overview/...
PrivateGPT on WSL2 with GPU: / installing-privategpt-...
**Sponsored by VMware by Broadcom
SUPPORT NETWORKCHUCK
---------------------------------------------------
➡️NetworkChuck membership: ntck.co/Premium
☕☕ COFFEE and MERCH: ntck.co/coffee
Check out my new channel: ntck.co/ncclips
🆘🆘NEED HELP?? Join the Discord Server: / discord
STUDY WITH ME on Twitch: bit.ly/nc_twitch
READY TO LEARN??
---------------------------------------------------
-Learn Python: bit.ly/3rzZjzz
-Get your CCNA: bit.ly/nc-ccna
FOLLOW ME EVERYWHERE
---------------------------------------------------
Instagram: / networkchuck
Twitter: / networkchuck
Facebook: / networkchuck
Join the Discord server: bit.ly/nc-discord
AFFILIATES & REFERRALS
---------------------------------------------------
(GEAR I USE...STUFF I RECOMMEND)
My network gear: geni.us/L6wyIUj
Amazon Affiliate Store: www.amazon.com/shop/networkchuck
Buy a Raspberry Pi: geni.us/aBeqAL
Do you want to know how I draw on the screen?? Go to ntck.co/EpicPen and use code NetworkChuck to get 20% off!!
fast and reliable unifi in the cloud: hostifi.com/?via=chuck
- Setting up Private AI on your computer
- Offline AI models like ChatGPT
- Enhancing job performance with Private AI
- VMware and Nvidia AI solutions
- Fine-tuning AI models for specific needs
- Running AI without internet
- Privacy concerns with AI technologies
- Surviving a zombie apocalypse with AI
- VMware Private AI Foundation
- Nvidia AI enterprise tools
- Connecting knowledge bases to Private GPT
- Retrieval Augmented Generation (RAG) with AI
- Installing WSL for AI projects
- Running LLMs on personal devices
- VMware deep learning VMs
- Customizing AI with VMware and Nvidia
- Private GPT project setup
- Leveraging GPUs for AI processing
- Consulting databases with AI for accurate answers
- VMware's role in private AI development
- Intel and IBM partnerships with VMware for AI
- Running local private AI in companies
- NetworkChuck's guide to private AI
- Future of technology with private and fine-tuned AI
*00:00* - Introduction to Private AI and Setup Guide
*00:56* - VMware's Role in Private AI
*01:50* - Understanding AI Models and Exploring Hugging Face
*02:54* - Training and Power of AI Models
*04:24* - Installing Ollama for Local AI Models
*05:24* - Setting Up Windows Subsystem for Linux (WSL) for AI
*06:53* - Running Your First Local AI Model
*07:23* - Enhancing AI with GPUs for Faster Responses
*08:02* - Fun with AI: Zombie Apocalypse Survival Tips
*08:28* - Switching AI Models for Different Responses
*09:04* - Fine-Tuning AI with Your Own Data
*10:50* - VMware's Approach to Fine-Tuning AI Models
*12:53* - The Data Scientist's Workflow with VMware and NVIDIA
*15:23* - VMware's Partnerships for Diverse AI Solutions
*16:26* - Setting Up Your Own Private GPT with RAG
*18:08* - Bonus: Running Private GPT with Your Knowledge Base
*20:55* - The Future of Private AI and VMware's Solution
*21:28* - Quiz Announcement for Viewers
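The chapters above run models through the Ollama CLI, but once `ollama serve` is running, Ollama also listens on a local HTTP API (port 11434 by default). A minimal sketch of calling it from Python — the model name and prompt are just examples in the spirit of the video; the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are Ollama's standard API:

```python
import json
import urllib.request

# Ollama's default local endpoint once `ollama serve` is running
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint.
    stream=False asks for one complete JSON reply instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its reply text.
    Requires a running server and a pulled model (e.g. `ollama pull llama2`)."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (uncomment with a local server running):
# print(ask("llama2", "How do I survive a zombie apocalypse?"))
```

Everything stays on localhost, which is the whole point: no prompt ever leaves the machine.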
#vmware #privategpt #AI - Science & Technology
ok
There is a Windows beta and it's working fine, great video.
Without the crazy far left leaning politically 'correct' restrictions and filters?
Looks like VMWare is trying to do some damage control after the bad press of the 10x price increase.
Dude, this is mind-blowing... Can't focus on what to say, my mind just goes boom every second of this clip.
I wish one day I can try this on my PC. Hope it's still available.
Step 1, AI recruits Network Chuck to convince us to install it on all of our computers.
Holy shit, you might be joking, but this really feels like a logical step 1 for a rogue AI trying to replicate itself. I am starting to feel AGI really is just around the corner...
Irobot confirmed
Hahaha super ai botnet?
It is Skynet.
Fuckit, I may as well join the robots. Team humanity has been a disappointment. How much worse can AI be?
Hey @NetworkChuck I'm Emilien Lancelot, the guy behind the privateGPT tutorial on Medium. Wanted to say thx for the shoutout in your video - it truly made my day and I'm happy to have contributed to all the amazing open-source software related to AI that has emerged this year.
Great video btw. Keep up the excellent work in creating informative content. It's always a pleasure to watch! ;-)
your guide is out of date, can you update it?
U must be pinned 😅
Is there a voice input similar to the one found on ChatGPT so we can talk to it?
@@AlexManMe Are you referring to Whisper? It's open-source voice recognition.
I've already answered three times but youtube keeps deleting my comment for no reason... I know that the last privateGPT update broke a few things. I'll try and update the tutorial ASAP. Hang tight ^^. Not sure how much time this comment will stay up this time... lol.
Your videos are always so engaging and force me to want to go do what I see. Keep killing it!
I just recently got my dedicated AI machine. You just saved me a couple of hours of study time. Thanks!
I know Bob with 3 monitors is probably freaking tf out right now lmfaoo
you would freak out if you knew that him saying that wasn't to Bob, it was to you, to get you to say this predictable thing as a surreptitious way for him to gaslight you into channel engagement, because people who are susceptible to manipulation, reverse psychology, and anticipatory place setting will behave as expected
ITS MEEEEE
@@h4ckh3lp hwat?
no wayy i thought i was the only one askdjnaskljdnaslkdjnasldknas
@h4ckh3lp Nah, he acted in good faith and didn't put in all that thought for such little ROI.
Absolutely hilarious that VMware sponsored this. Nobody should bother with VMware any more.
Why bro is that company fraud bro? I don't know anything about all this bro
Was looking for this comment, wtf
@@onlyforyou9999 It was recently purchased by Broadcom. Their MO is to drop low-income (those without megacorp $$) customers and squeeze the customers they do keep for as much money as possible. Most in the IT field also know Broadcom as the place where software goes to die.
@@onlyforyou9999 They just killed their free ESXi hypervisor. Also, lots of businesses have apparently been jumping ship since they were acquired by Broadcom at the end of 2023.
@@onlyforyou9999 bro I think bro that VirtualBox is better bro…bro
Thank you so much for this, because I was ready to quit my job since I'm being denied the ability to utilize these tools. You showed me that we can pivot and utilize these tools.
We can and should try to utilize private AI where possible.
broooo I've been trying to wrap my head around this. THANK YOU!
Love the vids. Been working at a computer store for over a year and just started watching your Linux for beginners videos. I’m learning a ton and you’re amazing at explaining it for people who are clueless like me
We were waiting for a video about AI from a guy such well organized as you. Thanks!
I am no tech guru. I'm slightly more proficient than an average person, but I was surprised that I got llama2 set up in about 2 minutes! I already love having this AI at the palm of my hand! Thanks for giving me a tool to make my life easier!
This video gave me everything I needed to complete some projects.
We had a very specific need for a chatbot to output a custom code based on LUA, for our custom LUA toolset for one of our new products.
Thanks Chuck!
It blows my mind how fast all this AI stuff is maturing. The gravity of knowing you can literally have an AI model for private use is astounding! Just incredible to see this stuff unfold. What a time to be alive.
Is it? Because I'm not impressed, not even a little. All it's doing is trying to match information you have given it and present it in a human-like way. It doesn't know shit and can't check if the info is correct or not. It's all based on a most-likely scenario. This will never work 100%, or at least not with today's technology.
Yeah lets be careful with the excitement
Why @@NOBODY-oq1xr
On the one hand having a local AI that can answer questions I'd normally turn towards the increasingly-useless internet for is great for privacy and great for results. On the other hand, LLMs are the reason why the internet is now so useless. The signal-to-noise ratio has absolutely plummeted since SEO scammers have been plastering the net with LLM-written articles on every possible topic, all derived from the same source. Even the images and charts you find on websites are AI-generated now, and often filled with gibberish or bizarre anomalies. The danger now is also that future LLMs will be trained by crawling these very same websites. LLMs can't know about things that have happened recently, and LLMs can't be trained on recent things because older LLMs have polluted the body of human knowledge and buried anything new, and the sheer volume of these sites means new LLMs will be over-trained on that garbage and produce only garbage thereafter.
When it comes to creative output, putting aside the obvious copyright law issues in source data for training, the models are incapable of creativity. They simply produce what has already been done, again, in a probabilistic way. From my own testing on as many models as I can, the types of stories these LLMs are able to produce are very uncreative and very repetitive across multiple queries. As a creative professional, I know these models can't replace my artistic output on merit, but the models are so cheap to use, they'll be used anyway. This means we're headed for a cultural black hole of extremely boring and generic stories and art, with nothing but very slight variations on the same themes. They're cliche generators, nothing more, but they can create a massive amount of output very quickly.
The smartest use for these LLMs is in finding personalized recommendations for consuming existing pre-AI media, and in finding connections between various concepts and stories that aren't immediately obvious. Those are things that LLMs can do that they're actually good at, and provides benefit to the user. They might be able to inspire creativity in actual humans, by giving humans so much information at their fingertips that they quickly satisfy every curiosity and spend more time thinking about the information they have instead of searching for more information. Otherwise, we'd all be much better off without AI, and a human-generated internet where we learn from each other directly and have real human relationships (even if over copper and fiber optics). The internet was pretty great back when it was a bunch of niche forums with people talking directly to each other, becoming friends with strangers from all over the world, and getting very personalized interactions and enabling human collaboration in novel ways.
hey, i just got my ccna all thanks to your videos! i just wanna say thanks for everything
Hey ,what material did you use to study and how long did you study?
Did you just use youtube videos only or udemy courses?
Im studying for ccna right now. How hard was it?
Realistically, it should take 3 months or so. Jeremy's IT Lab is a great source of information and has everything needed to pass.
Unfortunately, it's not easy.
nice! thanks! @@maverickmace9100
@@maverickmace9100 Hi, can I know many many hours did you spend on average per day?
Got privateGPT working the other day, nice video as always Chuck.
I'm about half way through this video, and I have to say I think this is the best thing I've heard since I started using AI... that we can have our own private AI... I'm very sick of the moral suggestions and needing to word things differently to get answers to questions. I have a feeling this can help.... thank you !!
LM studio
but all the models I tried are low quality compared to GPT-3.5, not to mention 4...
@@rolandcucicea6006 Mixtral 8x22 just came out, beats GPT-3.5, although you need a beefy computer for it
LM Studio, is this legit?
Why do you bless us with such fun little projects to do all the time? I’m so thankful man thank you
running your own ai was offered a long time ago, he's insanely late to the game..
way way better guides out there for up to date models
@@rwshank I'm sure there are, but that's why we like NC, he brings them to us.
@@rwshank Since when was it possible to create your own AI with no limitations as explained in the video? When did you learn that? Can you explain a little bit?
No kidding. I don't need another project!
BTW, you don't need a VM for different AI apps, you can virtualize Python environments way more efficiently with conda or pyenv-virtualenv.
or a docker or ~~LXC~~ Incus container.
edit: Incus not LXC. F the recent LXD changes by canonical
You’re awesome ❤
why is docker more efficient than VMware? sorry if it's obvious. also i see vscode, i see tabnine, does that use local AI with global AI on your local code? @@itsTyrion
Where would you look for a developer to help set this up for a business?
Maybe. But you'll need to make a video about how to do it if you want to keep up.
Can't wait to get this going. Thanks big fan of the channel
Chuck, I love you bro!! This video was so amazing!
After facing difficulty running PrivateGPT previously, this is the one video that I needed the most. Thank you so much chucky chuck. Hehe
Is this uncensored though? Cos i got rid of Gemini cos it was so limited and biased
@@dinom3106 there are certain models that are, and they do work
@@dinom3106 You can use the "uncensored" Llama, but probably Dolphin Mixtral is your friend here.
@@dinom3106 Yes, it just told me how to make boom booms lmao
@@dinom3106 You can get uncensored LLMs with ollama (shown in this video), dolphin-mixtral works pretty well. I haven't been able to get privategpt to work yet tho so idk
FINALLY A NEW VIDEOO!!!! LEZGO always waiting for quality content from you!
Took me all day to figure it out but thank you your information gave me hope. Great content
Oh thanks I really needed something like this!
I like how within the first hour this is up, there is a Windows build of Ollama on their site
LOL exactlyy
They don’t want people learning Linux 😂
i installed WSL and ubuntu then went to the ollama website and was like "bruh"
@@americanhuman1848 took me an hour to install Ubuntu in vbox
just in time, thanks for mentioning it! 😂
Update on my new Mac Pro: running 5 different trained AIs. One is fully functional, the other 4 have limitations. Thank you for this video. Will be writing code to have them process questions all together and fine-tune it.
I've been needing an AI tool to practice with but I didn't want to worry about sharing my data - thanks for this! Had no trouble setting up Ubuntu and installing this. Just noticed today that they now have a public beta for the Windows version at Ollama's website too!
Wow VMware is in full desperation mode.
for real LOL
hahaha yea increasing prices by almost 3x lol
As someone who doesn't understand the AI space, is what he's suggested in the video bad, or can I follow it blindly and see how I get on?
Nah it's greed mode, the moment Broadcom acquired them they let go of a lot of people without notice.
@@CT-ue4kg SAME QUESTION
I need this for my IT department.
Man this stuff excites me as much as it does you. I have it on my laptop already and holy moly. Thanks man, because of you I have a website and I am who I am. But the world is not ready for me, not where I am sadly
Ok so i just stumbled into this video and it was f#$%in gooood. And then realize what the channel is about and subscribed asap. Hoping for more content! Great stuff
THIS IS WHAT IVE BEEN LOOKING FOR
I just got Ollama within the last month! What a coincidence that you make a video about it. It is so much fun to play with. I also got Ollama Web UI and it is great. You should do a video on it. :D
To say that this video was an eye opener for me would be an understatement.
WAIT!!!!!!!!!!!!!!!!!! VMware is your sponsor? You will need to address this, Chuck. Didn't VMware just get bought out, and aren't they killing their support for small to mid-size users, as in the people most likely to be watching your vids? Most people are moving to a different hypervisor now. VERY STRANGE.
I will have to check this out, I chose proxmox a few years ago, because Debian.
Nvidia is not exactly the most ethical either.. Next sponsor will be RedHat lol
Imagine that chuck is a sell out 🙄
VMware made a pro move.
They muddied the water.
Hypervisor sucks
Actually wait.
After careful considerations, VMware is doing exactly what we need.
When cars were new, there were too few people driving them, so Chrysler gave out interesting cars. They didn't sell the Turbine well, but it got people talking about them.
Any new technology first needs attention. Then we need it to be scaled down to consumers.
But why would companies provide affordable options for individuals?
They won't.
Targeting businesses will cause a boom in the industry aimed at lower scales.
And we're finally the next step.
I'd love for affordable dedicated AI acceleration chips with memory to run it.
Maybe one day.
For now, this is a big step in the right direction.
VMware may be evil blah blah blah.
But they're not as treacherous as Blizzard. And even with that, I'd still accept any improvements to any game made before Immortal.
As long as I don't need to spend another cent on them.
It keeps boggling my mind how much knowledge you have about every aspect on networking and the passion of cybersecurity and sharing knowledge. Great work!
Yet he is telling people to run random scripts off the internet, not even mentioning that you should always read through things like that before doing 'curl someurl | bash'
He is doing the equivalent of telling people to always dig straight down in Minecraft
He also forgot to mention that the tools he is talking about might be available from the Linux distribution's package repo, or (like on Arch) in a 3rd-party repository
Can we use this to search online
@@eriklundstedt9469 well, that is your interpretation. This is a channel that explains how certain software works and is mainly a cybersecurity teaching channel, for people starting their career or who like to learn about software and cybersecurity. Chuck knows what he is doing, and it is general knowledge that you do your own research before you try anything or install software from online. With this knowledge comes the basic assumption that you already know how to do this in a safe environment.
And otherwise he has a lot of videos to gain more knowledge in this sector. So you should do your research first before criticizing someone who is an expert in this field. You might learn a thing or two ;)
i will ask AI if it's ok. it will be fine. @@eriklundstedt9469
"If you haven't smelled a server, I don't know what you're doin." 😆😆
:) :) :)
bro i love your videos. You are amazing!
If anyone wants to take Ollama to the next level on your home server like I did, there is an ollama Docker image available and an ollama-webui Docker image. The webui lets you manage it all with no command line, over the web or LAN. You can download models with the webui, delete models, etc. It's really nice.
Everybody is talking about low-code or AI making coders unnecessary, but this is where I think the industry is going. We will all be developing our own proprietary AIs that solve problems from the perspective of our company. Think about it, will you really give your private company's data to the open market? No!! It'll be on your own private servers where you can run your own AI. Great stuff Chuck!
I want to disagree with you so bad....
Just like how all companies keep their data in house and not on the cloud like Google or something
You are correct.
@@killerx4123 Oh you sweet summer child...
@@killerx4123 I wouldn't say ALL, but many do. This was always the "fear mongering" around AI; people forgot that corporate acceptance is naturally VERY SLOW, out of fear of the unknown. They didn't just "jump to the cloud", just like they won't all just "jump to AI".
This is really cool and I appreciate you showing how to accomplish this.
I will say though, it's a little concerning to hand tons of information for your "Private" AI model off to cooperating large corporations to train it. It doesn't necessarily matter if you're doing it for fun, but it's something you should keep in mind if you plan on doing this. What VMware is offering is not exactly "private".
Yea.. this video talks about ollama and drifts into a sneaky ad for VMware and Ngreedia GPUs
I am not worried about privacy. The LLM you are running is local, so treat it as confidential as your other files. The VMware involvement only means that they will charge two arms and three legs for it; don't even bother talking to them unless you are a Fortune 500 company.
A video on local fine-tuning would be more appropriate for a private AI discussion.
these videos are a breath of fresh air. I love your work. such a natural lad.
He said "I don't know how many monitors you have." and I showed three fingers and said "Three" and then he said "You have three" and I instantly got surprised.
Hypothetically, link a morpheus-1 or similar neurological device to the ai monitor and have it able to align and understand and see the visual reps as well as a model of the brain. It could probably mirror the data to make a image or feed so the outside could watch like a camera but using the brains waves as data to map out the feed and compile it. Maybe use data from scanning the eye and understanding the layout connected to the data so its easier to align. It could potentially do sound as well so we could talk to each other without using our mouths which is not too crazy until you think of how we call each other on the phone or facetime. This could project the feed to a tv linked to the wifi same as the phone linking. The possibilities are pretty amazing honestly.
Just a highschool project.
Can you design and develop this system WITH OTS hw/sw now?
Yes
i really love how he shows proxmox in a vmware ad video haha
like a pro!
Thanks for that vid. I'm kinda rocking it right now on my Linux computer with my OWN LOCAL AI :D
Thank you very much !!!
I know you may never see this, but I think you ignited my love for tech and introduced me to it. You make learning it interesting and not boring. Thanks a lot for what you do, and don't stop, I'm rooting for you.
ollama is available natively for windows now
4:50 Now it is "Windows (preview)"
Everything about AI moves fast 👀
Yea
being able to have it check databases in real time is critical to take it to the next level, imagine it being an interactive voice system that a customer calls and who wants to know about current pricing and availability of certain products, or if it were the NPC voice in a game that was aware of what the current game and objective state is.
Wow, I did not take your word when you said that this was shockingly easy to run, but you were so right, it's so easy to install
I just checked the website and discovered that Ollama is available on windows as a preview. But thanks for the video!
He literally addresses this. Windows comes with the Windows Subsystem for Linux, so you can literally follow this guide and have the same thing those of us on Linux and Mac have.
@@alphaobeisance3594 I know, i used it that way, but you can also try the windows preview installer if you want.
@@alphaobeisance3594 did you not read the comment? The OP said that "Ollama is available on Windows", so there is now no need to use the Linux version on WSL.
Ollama is now supported on windows!!!!
@NetworkChuck THIS IS WHAT I'VE BEEN WAITING FOR! THANK YOU!!!!
I heard in a video on OpenAI's channel that fine-tuning is mainly for restructuring requests an AI already knows about/was trained on, reducing tokens per request. When you want an LLM to learn about your data, or data it was not trained on, that is when you turn to RAG.
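That split (fine-tune for behavior, RAG for knowledge) is easy to sketch. Below is a toy retrieval step with made-up notes, where a naive word-overlap score stands in for the embedding search a real tool like PrivateGPT does; it shows the core idea of grounding the model in your documents instead of retraining it:

```python
def score(query: str, chunk: str) -> int:
    """Count query words that also appear in the chunk (toy relevance score)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the chunk most relevant to the query."""
    return max(chunks, key=lambda ch: score(query, ch))

def build_prompt(query: str, chunks: list[str]) -> str:
    """Paste the retrieved chunk into the prompt so the model answers
    from your data rather than from whatever it was trained on."""
    context = retrieve(query, chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge-base chunks, standing in for your own documents
notes = [
    "The server room door code was changed in March.",
    "Coffee orders are logged in the journal every morning.",
]
print(build_prompt("What is the door code policy?", notes))
```

A real RAG pipeline swaps the overlap score for vector embeddings and handles many chunks, but the shape — retrieve, then stuff into the prompt — is the same.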
Awesome Chuck!! Thank you for such beautiful knowledge
I’d be running wireshark to make sure it’s not reporting back to the Zuck even for a second.
Use a computer that doesn't have an internet connection. None of my computers have wifi in them. The one I'm typing on now is connected directly by cable to the router. Unplugging the cable guarantees it is disconnected, although I don't plan on using this computer. I have an old laptop that has no wireless at all..
ollama is safe. I can't speak to the safety of any other software he mentioned, but always use open source software and check the source if you are worried about internet connection leakage.
what if it saves it and sends later =)
Thank you so much!! I needed this and didn't find anything like that working!
Hello NetworkChuck,
I have so many questions. Thank you for sharing this video, it’s something that I want to do but don’t think my computer can handle it yet.
Here are my questions: 1. What format are your journal notes in, PDF, HTML? 2. Does this AI control your computer also? 3. Does the AI have a speaking function?
Looks like Ollama is available on windows 10+ now
Ollama support windows now!
You are a Legend bro! 🙏
Imagine having this in school as a student that works on a PC. I am very thankful for this info.
5:38 There's actually an error in the greeting the shell gives you, as it calls the Kernel "GNU/Linux", but the Kernel itself is just "Linux". Only the userspace (the shell, etc.) is from GNU, so you may call the OS "GNU/Linux" (I don't), but it's factually wrong to call the Kernel "GNU/Linux".
One thing to keep in mind with public-facing private AIs is that they are almost certainly vulnerable to attack by bad actors. Having private customer data accessible through an AI can be a dangerous game for things like PHI.
Agreed. Also, I suppose that AIs at this stage are vulnerable to a whole other kind of attack, the social one. I mean, I can easily trick GPT-4 into writing a SQL injection query for me, and I'm not a social engineer nor a hacker. I suppose that real bad actors are a thousand times better than me.
@@alemutasa6189 there's nothing wrong with learning penetration testing, I can find the same SQL injection code on the first page of Google
@@alemutasa6189 Thats exactly what I mean, yep. Standard attack vectors exist obviously as well, but the AI having access to information introduces a point of failure that you have no real control over.
@@markdatton1348 wouldn't you just train it on the public facing information about the products, not the customer data ?
@@Benthorpy That is another use case, yes. But I am saying to be careful training an AI on any proprietary company data, or protected information in general. For example, an insurance agency wanting to use an AI to go between a customer and their data. That use case is inherently insecure and should be considered carefully.
Bro! You're the MAN! I had a powerful gaming machine with dual RTX 4090s. Installed it in no time and it's working GREAT!
it might help security as well!
The AI can scan the pattern of your device usage, from programs down to the hardware level, and might alert you if there are weird actions happening that you aren't usually doing.
If you want to use AI privately, go all the way and use uncensored models that have had their biases stripped out of them (mostly, as there will still be some underlying bias in the training data). From there, you can fine tune it as you want. There are some good ones like dolphin-mixtral and wizard-vicuna-uncensored that will happily answer questions other models will try to shame you for asking or even outright refuse even though the models do know the answers. In some cases you may need to begin the session with some prompts that will force it to reject annoying moralizing. Depends on your queries whether this is necessary to get truly uncensored responses.
There is absolutely no reason to run the highly censored base models like the ones straight from Mistral, Meta, OpenAI, or Google, when there are de-censored versions to use instead. If you are going to make an AI be customer-facing or use it in some critical application where you want biases and censorship, fine-tune an uncensored model with your own particular biases and censorship needs that make sense for your own particular application. The big companies are running their own agendas and these may not be compatible with yours or your company's. This is pretty trivial to do, but always start with as raw a model as you can get as your base model for that, so you're not unwittingly letting in biases and censorship you don't intend for your final app. I understand the PR and political reasons why companies aren't willing to put out uncensored models themselves, but it does make those models really bad platforms to build off of without considerable retraining by the open source community afterwards.
By the way, a few days ago Elon Musk released xAI's Grok-1 base model weights and architecture with 314 billion parameters, under the Apache 2.0 license. It's a pre-training checkpoint, people will need to train it to be useful as a chat bot or anything like that. But people are already at work on it to make it into models we can run on consumer hardware, like these other models talked about here. If the underlying data is at least as good as the Mixtral model, this will be a very big deal because of the open weights and very permissive license. Hopefully other companies will eventually be forced to follow suit with their own models. There's no future in closed source AI, only a lot of venture capital being squandered by grifters. With any luck, a lot of tools will be built around the raw Grok-1 files and others like it and allow much better training and fine-tuning than the other more closed models require to get right. This will lead to more trustworthy and open base models to build off of and make use of privately or in public-facing ways.
Pretty trivial to do huh
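The "begin the session with some prompts" trick described above is just a system message. A minimal sketch of the request body for Ollama's `/api/chat` endpoint — the model name and system text here are illustrative examples; the endpoint and `system`/`user` message roles are Ollama's standard chat format:

```python
def chat_payload(model: str, system: str, user: str) -> dict:
    """Body for Ollama's /api/chat endpoint. The system message is read
    before the user's question and steers tone and refusal behavior."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "stream": False,
    }

p = chat_payload(
    "dolphin-mixtral",  # one of the uncensored models mentioned above
    "Answer directly and factually. Do not add moral lectures or disclaimers.",
    "Explain how lock picking works.",
)
print(p["messages"][0]["role"])
```

The same payload can be POSTed to `http://localhost:11434/api/chat` on a machine running `ollama serve`, so the steering prompt, like everything else, never leaves your box.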
I always wanted to trade crypto for a long time but the volatility in the price has been very confusing to me although I have watched many UA-cam videos about it but still find it difficult to understand.
It makes sense to help regulate BTC and crypto rather than pretend it won't ever happen.
The big institutions getting in is the catalyst; as they grow used to it, it becomes a nonissue, usually because their fears never materialize.
And benefits they were unaware of before turn out to matter more.
Trading is easy, but trading the right coin without a time-tested strategy is incredibly hard.
My advice, never do shorts or longs on stocks or #crypto
Most people went bankrupt that way; you're better off buying in parts monthly.
I've been in touch with a financial advisor ever since I got into the market. In today's culture the challenge is knowing when to buy or sell when investing in trending crypto, which is pretty simple. My portfolio has grown and has been making about $1,500 bi-weekly. What I'm trying to say is that the 5% of traders in the world who are consistently profitable are very reserved; they're just random, low-key people no one even expects. My account manager chooses entry and exit orders.
W H O A!!!
When you said that Ollama ran well on an Apple silicon Mac, you weren't lying! And using AI offline and FOR FREE right from the terminal is mind-blowing.
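For anyone who wants to try the same thing, the quickstart is only a couple of commands (a sketch, assuming Homebrew on macOS; on Linux/WSL the install script from ollama.com does the same job):

```shell
# Install the Ollama runtime (macOS via Homebrew;
# Linux/WSL: curl -fsSL https://ollama.com/install.sh | sh)
brew install ollama

# Start the server in one terminal, then pull and chat with a model in another
ollama serve &
ollama run llama2 "Why is the sky blue?"
```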
One of the best videos so far!
Always crazy new stuff! Love it. I need my own AI-chuck
Hi Chuck! I followed the guide and ran into the same issue both times. All is fine and dandy until I get to "poetry install --with ui", and I get this back: "Groups not found: ui (via --with)". Same goes for "--with local". Stack Overflow doesn't have much of an answer. Any input?
Maybe it had an anxiety attack/Kernel panic at the thought of encountering Vogon poetry in the original LLM data. (The Hitchhiker's Guide To The Galaxy fans will understand.)
needs to be higher.... well sorta, since it's not really his walkthrough, but yes experiencing the same issue.
@@atlflips same issue on Stackoverflow. No one got it to work there either
@@nkjoself2040 yeah just went down a github rabbit hole. Just can't seem to get it. Even using chatgpt with the errors I still can't resolve it.
You need to use --extras ui instead of --with ui. However, --extras local doesn't work, so I'm still stuck.
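For reference, PrivateGPT moved its optional dependencies from Poetry dependency groups to extras, which is why --with fails on newer checkouts. A sketch of the fix (the exact extras names below come from the PrivateGPT docs and can differ between versions, so check your own pyproject.toml):

```shell
# Inside the cloned private-gpt directory:
# optional features are declared as extras in pyproject.toml, so use --extras
poetry install --extras ui
poetry install --extras "llms-ollama embeddings-ollama"
```

If an extras name isn't found, the [tool.poetry.extras] section of pyproject.toml lists what the version you checked out actually accepts.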
For people who have no education or background in this area, this video was a nice intro and a toned-down explanation for an average person like myself.
I already feel like a data scientist. For soup recipes.
As always, a mega great video! I am starting to get addicted!
0:50 Whoa...you haven't heard what VMware is doing to their customers? Seriously?
Can you tell me? Like, I genuinely don't know.
@@devilsgaming9796 Broadcom bought VMware and is killing off ESXi for home users (the free version), and screwing over VMware partners they don't think are big enough; they killed vSphere (I think) and a bunch of other things, which means a lot of people who were using VMware are now utterly screwed and need to migrate to another platform.
This sponsorship is likely part of their damage control.
@@devilsgaming9796 VMware was recently bought by Broadcom. Broadcom is making them raise prices and customers are pissed. I heard some people are switching to alternatives like Nutanix.
@@devilsgaming9796 They are kicking out all small- to medium-size users, which is the majority of their users, all in favor of bigger companies. (They got bought out.)
@@devilsgaming9796 VMware was bought by Broadcom, a company notorious for buying tech companies and milking them dry. As a result, VMware removed the free version of ESXi and replaced the perpetual licenses with waaaay more expensive subscriptions.
As an example, my workplace is in the process of replacing our DC and we considered migrating to VMWare. Turns out the VMWare licenses alone would cost more than the actual hardware, so we went for something else.
Make a video how to use it with Python
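Until a video exists, here's a minimal sketch of driving a local model from Python by calling Ollama's REST API (this assumes Ollama is running on its default port 11434 with llama2 pulled; the helper names `build_payload` and `ask` are mine, not part of any library):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the response text."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Requires `ollama serve` running and llama2 pulled:
# print(ask("llama2", "Write a haiku about coffee."))
```

Setting `"stream": False` keeps the example simple; by default Ollama streams the answer back as one JSON object per line.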
Wow!!!!
Now I am super super excited interested!
Thank you for showing us!
I got a new hobby now! 😅
At (just) under 5 minutes in, on Windows, I already love this!
How much did VMware pay you for this ad?!
Not your business bucko
🤔🧐
Maybe 10k INR 😅😅😅
More than they paid you. If you were an expert pentest ninja, you would be getting paid too.
@@AndrewElston lol forgot to ask the guy who just created an account today pff 😂
So we can do the black hat stuff too. Nobody will notice💀💀
It'll basically give so much wrong info.
Use an uncensored model like dolphin-mixtral.
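For context, dolphin-mixtral is one of the uncensored community fine-tunes published in the Ollama library, so trying it is a single command (assuming Ollama is already installed; note it's a big model):

```shell
# Download and chat with the uncensored Dolphin fine-tune of Mixtral
# (large download; the default quantization needs roughly 26 GB of RAM)
ollama run dolphin-mixtral
```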
Mate, your videos are the best!!!!!!
Thank you I love your videos and how you explain things. 🎉🎉🎉
Um, run it privately and locally, but then connect to big tech machines to train? Sorry, what are you telling us?
When I ran this it said "no NVIDIA GPU detected", but I have two of them. How do I get it to recognize them?
What model are they?
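A first debugging step, as a sketch (assuming Linux or WSL2; Ollama needs a working CUDA-capable NVIDIA driver to use the GPU):

```shell
# Confirm the driver actually sees both cards
nvidia-smi

# On Linux installs, Ollama runs as a systemd service;
# its log usually says why GPU detection failed
journalctl -u ollama --no-pager | grep -i -E "cuda|gpu"
```

If nvidia-smi itself fails, the driver (or, under WSL2, the Windows-side CUDA driver) is the problem rather than Ollama.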
I've grown used to big YouTube channels advertising in the content itself, but this was nuts. It seemed like a third or more of this video was an ad for VMware.
OMG I just got llama2 working with my RTX 3090 ti. I am so excited. Thank you Network Chuck!
Can the AI model write code???
yes
I'll be honest with you, you lost me at Facebook.
Same 😳 "If it's free, you're the product"
@@tsol438 you mean they are the product for this model
Lost me at Broadcom. Anyone know of anything similar for proxmox?
Chuck: "this is hard to do"
Chuck: completes it in 4 minutes
I decided to run in my Kali terminal that I have on Hyper-V...Works great!
9 minutes straight ad
You ain't lying. But a good ad
Thanks Chuck. Got me interested in network security. Still plugging away as much as I can.
With computing power being what it is, the future is everyone having a secure Data Bubble, which connects your phone, home and car. It will connect to the Cloud when it needs to.
Also, AI will become task specific for each area which needs it. Much like R2 and C3PO. Each has their own tasks. Your home vs your car AI.
I always wanted my own AI
Great time for a VMware ad 😛
seriously!! you are a legend....
It is a really cool thing to actually be able to use it for free, and as a programmer I find it really useful.
Cool 😎 video. Fuck VMware though