Building My Ultimate Machine Learning Rig from Scratch! | 2024 ML Software Setup Guide

  • Published Jan 31, 2025

COMMENTS • 264

  • @Hamisaah
    @Hamisaah 9 months ago +46

    You put so much effort and knowledge into this video! I watched all the way and it was interesting how you demonstrated the whole build from scratch. Keep it up!

    • @sourishk07
      @sourishk07  9 months ago +4

      Thank you so much for watching! Excited to make more ML videos 🙏

    • @paelnever
      @paelnever 9 months ago

      @@sourishk07 Better to use llama.cpp instead of Ollama; it's faster and has more options, including model switching or running multiple models simultaneously.

    • @sourishk07
      @sourishk07  9 months ago +1

      Thanks for the recommendation! I'll definitely take a look.
      I like Ollama because of how simple it is to get up and running and that's why I chose to showcase it in the video.

    • @paelnever
      @paelnever 9 months ago

      @@sourishk07 You don't seem like the kind of person who likes "simple" things. Anyway, if you want to run llama.cpp in a simple way, you can do that too.

    • @sourishk07
      @sourishk07  8 months ago

      @paelnever I just played around with it and it seems really promising! Definitely want to spend more time looking into it. I appreciate the rec
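The Ollama vs. llama.cpp exchange above comes down to convenience versus control. As a rough sketch (the model name and GGUF path are illustrative placeholders, not commands from the video), the two workflows look like this:

```shell
# Ollama: one command pulls a model and serves it behind a local API
ollama run llama3

# llama.cpp: build it yourself, then serve any GGUF file you point it at,
# with fine-grained control over GPU offload (-ngl) and context size (-c)
./llama-server -m ./models/model.gguf --port 8080 -ngl 99 -c 4096
```

The extra flags are exactly the kind of "more options" the commenter is referring to.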

  • @HoangMinhTran-v9d
    @HoangMinhTran-v9d 2 months ago +5

    Bro, I really admire the effort you put into guiding and explaining the processes you carry out in such detail. This really should be the go-to tutorial video for AI enthusiasts who don't know how to set up the environment or supporting tools. Once again, thank you for sharing something truly useful!!

  • @mitulbhatnagar983
    @mitulbhatnagar983 16 days ago

    Amazing video. I was here to see the hardware you used but I got so much more. Great work.

  • @dominiclovric8984
    @dominiclovric8984 6 months ago +3

    This is the best video I've seen on this topic! Well done!

    • @sourishk07
      @sourishk07  5 months ago

      Thank you so much!!!

  • @hasannkursunn
    @hasannkursunn 6 months ago +2

    The resources that you shared are amazing👍 I always see videos that teach you how to build the system, but your video includes much more than that👌 Thank you very much!

    • @sourishk07
      @sourishk07  6 months ago

      Thank you for watching! I'm glad that you enjoyed it!

  • @porterneon
    @porterneon 3 months ago +17

    For the price of a 4080 Super you can get 2x 4060 Ti with 16 GB VRAM each. Then you could parallelize some work by loading different models onto each GPU.

    • @collinslagat3458
      @collinslagat3458 1 month ago +1

      The electricity bill will be massive

    • @porterneon
      @porterneon 1 month ago

      @@collinslagat3458 you can always set power limit.

    • @rupertsmith6097
      @rupertsmith6097 4 days ago +1

      Shame there is no nvlink on 40xx cards - or else you would have a 32GB GPU with that setup.

    • @porterneon
      @porterneon 3 days ago +1

      @@rupertsmith6097 I have 2x 3090 and even without NVLink I'm able to load models bigger than 24 GB. The model gets split between the two cards. Load time is longer, but it works.

    • @woodnotemusix
      @woodnotemusix 2 days ago

      @@porterneon I'm looking at a 2x 3090 setup as well. What's the biggest model that you were able to run comfortably at ~15+ tokens/sec?
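For anyone weighing the dual-GPU route from this thread: per-card power caps and VRAM usage can both be handled with `nvidia-smi`. A minimal sketch (the 200 W value is an arbitrary example, not a recommendation from the video; valid limits depend on the card):

```shell
# Cap each GPU's power draw (requires root; value must be within the card's supported range)
sudo nvidia-smi -i 0 -pl 200
sudo nvidia-smi -i 1 -pl 200

# Check how a model's weights ended up split across the cards
nvidia-smi --query-gpu=index,name,memory.used,memory.total,power.limit --format=csv
```

This is how the "you can always set a power limit" suggestion above is usually done in practice.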

  • @Chak29
    @Chak29 8 months ago +1

    I echo other comments; this is such a great video and you can see the effort put in, and you present your knowledge really well. Keep it up :)

  • @gbgungnir
    @gbgungnir 3 months ago

    Thank you Sourish! All the knowledge you are sharing here is just invaluable. It was really inspiring content for me. Again, thank you!

  • @sergeziehi4816
    @sergeziehi4816 9 months ago +5

    This is days of work, compiled freely into one single video! Thanks for that. Priceless information here.

    • @cephas2009
      @cephas2009 9 months ago

      Relax, it's 2 hrs of hard work max.

    • @sourishk07
      @sourishk07  9 months ago

      Don't worry, I had a lot of fun making this video! Thanks for watching and I hope you're able to set up your own ML server too!

    • @JorgeDizDias
      @JorgeDizDias 6 months ago

      @Sourish Kundu Indeed this video is very nice and very informative. Would you share the commands in one file?

  • @Eric_McBrearty
    @Eric_McBrearty 9 months ago +1

    This was a great video. I had to pause it like 10 times to make bookmarks to all of the resources you dove into. Then I saved it to Reader and summarized it with ShortForm. Great stuff. You went into just enough detail to cover the whole project and still keep the video moving along.

    • @sourishk07
      @sourishk07  9 months ago +1

      That was a balance I was trying really hard to navigate, so I'm glad the video was useful for you! Hope you have as much fun setting up the software as I did!

    • @crypto_que
      @crypto_que 7 months ago

      I had to slow the video to .75 speed to make sure I was understanding what he was saying.

  • @fou2flo
    @fou2flo 3 months ago

    Just insane... never knew you could do that... life changer. Thank you so much!

  • @thethinker6837
    @thethinker6837 6 months ago +3

    Amazing project!! One step away from creating a personalized Jarvis, hope you create one 👍

    • @sourishk07
      @sourishk07  6 months ago

      Haha maybe that's my long-term plan!

  • @TimothiousAI
    @TimothiousAI 2 months ago

    Incredible video. Very well articulated and didn’t miss a step. Thank you!!

  • @aravjain
    @aravjain 8 months ago +3

    This feels like a step by step tutorial, great job! I’m building my RTX 4070 Ti machine learning PC soon, can’t wait!

    • @sourishk07
      @sourishk07  7 months ago +2

      Good luck and I hope you have fun! I love building computers so much haha

    • @aravjain
      @aravjain 7 months ago

      @@sourishk07 me too!

  • @halchen1439
    @halchen1439 8 months ago

    This is so cool, I'm definitely gonna try this when I get my hands on some extra hardware. Amazing video. I can also imagine this must be pretty awesome if you're some sort of scientist/student at a university that needs a number-crunching machine, since you're not limited to being at your place or some PC lab.

    • @sourishk07
      @sourishk07  8 months ago +1

      Yes, I think it's a fun project for everyone to try out! I learned a lot about hardware and the different software

  • @shasum4226
    @shasum4226 2 months ago

    This was one epic video. Thank you for sharing all this priceless knowledge.

  • @DailyProg
    @DailyProg 9 months ago +1

    I found your channel today and binged all the content. Please please please keep this up

    • @sourishk07
      @sourishk07  9 months ago +1

      Wow I'm glad you found my channel this valuable! Don't worry, I have many more videos coming up! Stay tuned :)

  • @naeemulhoque1777
    @naeemulhoque1777 5 months ago

    Bro made one OP video! 🔥🔥🔥🔥😃
    Please make more LLM-focused videos:
    1. More PC building guides for LLMs.
    2. Differences between quantized models.

  • @jefferyosei101
    @jefferyosei101 9 months ago

    This is such a good video. Thank you, can't wait to see your channel grow so big, you're awesome and oh we share the same process of doing things 😅

    • @sourishk07
      @sourishk07  9 months ago

      I really appreciate those kind words! Tell me more about how our processes overlap!

  • @BirdsPawsandClaws
    @BirdsPawsandClaws 27 days ago

    Nice design and choice of components!

  • @init_yeah
    @init_yeah 4 months ago +1

    Nice vid man, I'm still saving up for my setup. Got the 4080 Super already, the rest will be easy!

  • @deltax7159
    @deltax7159 9 months ago +4

    Can't wait to build my first ML machine

    • @sourishk07
      @sourishk07  9 months ago +2

      Good luck! I'm really excited for you!

  • @benoitheroux6839
    @benoitheroux6839 9 months ago +1

    Nice video, well done! This is promising content! Can't wait to see you try some Devin-like stuff or test other ways to use LLMs.

    • @sourishk07
      @sourishk07  9 months ago

      Thank you so much for watching! It'll be really cool to be able to run more advanced LLMs as they continue to grow in capabilities! Excited to share my future videos

  • @Zelleer
      @Zelleer 9 months ago

    Cool vid! Not sure about pulling hot air from inside the case, through the rad, to cool the CPU though. But really a great video for anyone interested in setting up their own AI server!

    • @sourishk07
      @sourishk07  9 months ago +1

      Hi! That’s a good point, but from my testing, the max difference in temperature is only about 5 degrees Celsius. Also, keeping the GPU cooler is more important.
      And because the only place in the case for the rad is at the top, I don’t want to have it be intake, because heat rises and the fans would suck in dust.
      Thanks for watching though and I really appreciate you sharing your thoughts! Let me know if there were any other concerns that you had. Always open to feedback 🙏

  • @pradeepr2044
    @pradeepr2044 6 months ago +1

    Absolutely loved the video. Learnt a lot. Thank you...

    • @sourishk07
      @sourishk07  6 months ago

      You're welcome! I'm glad you learned something!

  • @mabillama
    @mabillama 3 months ago +1

    Exceptional content!

  • @akashtriz
    @akashtriz 9 months ago +5

    Hi @sourishk07,
    I had considered the same config as you but then changed my mind due to:
    1. The unstable 14900K performance caused by the MoBo feeding the i9 insanely high power. Please do make sure you enforce Intel's thermal limitations in the ASUS MoBo BIOS settings. 😊
    2. Instead of the NR200P I opted for the AP201 case so that a 360mm AIO can be used for the CPU.
    3. I went for a used 3090, as much of my focus will be on using the A100 or AMD MI300X in the cloud.
    ROCm has made huge progress; noteworthy are the efforts George Hotz is making to make ROCm more understandable for the ML community.
    Overall congratulations buddy, hope you succeed at your goals.

    • @sourishk07
      @sourishk07  9 months ago +1

      Hi! Thanks for watching the video and sharing your setup. You bring up completely valid points.
      1. I personally haven't had any issues with 14900K stability. I didn't turn on AI overclocking in the BIOS and just left settings at stock (except XMP for RAM). I'm probably more wary of any sort of overclocking now that the news has come out, though lol
      2. The reason I opted for the smaller case was that I wanted to try building in a SFF case for the first time. The good thing is that cooling hasn't really been impaired, although a larger radiator never hurts
      3. I should've considered a used 3090 too, but since I also wanted to do some computer graphics work, I opted for the newer architecture.
      And while the advancements in ROCm do seem promising, I'm not sure anything will ever take me away from NVIDIA's vast software suite for ML/AI, but maybe one day, we'll see!

    • @jwstolk
      @jwstolk 5 months ago

      @@sourishk07 The issue is that the BIOS defaults don't just result in instability; they can permanently damage the CPU. This may eventually be fixed by OS updates that try to update the BIOS, but Intel has been quite slow in admitting the long-known issue and providing proper fixes. It may be worth looking into this a bit more before assuming the BIOS defaults are safe, since the issue is specifically about incorrect BIOS defaults.

  • @raze0ver
    @raze0ver 9 months ago

    I'm just gonna build a more budget PC than yours for ML this weekend, with a 5900X + 4060 Ti 16GB (not a good card, but enough VRAM...). I'll go through your video and follow the steps to set everything up; hopefully it all goes as smoothly as it did for you! Thanks dude!

    • @sourishk07
      @sourishk07  9 months ago +1

      Thanks for watching and good luck with your build! I think for my next server build I want to use GPUs with more VRAM, but 16 GB should serve you fine for a budget build

    • @raze0ver
      @raze0ver 9 months ago

      @@sourishk07 do you think those pro cards such as the A4000 or higher are really necessary for casual ML given their price tags?

    • @sourishk07
      @sourishk07  9 months ago +1

      @@raze0ver No, probably not. Since those cards are originally targeted at enterprise, they're overpriced. What I should've done is gone for a used 3090 because that's the best bang for your buck when it comes to VRAM or a 4090 if you can afford it.

  • @jordachekirsten9803
    @jordachekirsten9803 8 months ago

    Great, clear and thorough content. I look forward to seeing more! 🤓

  • @JsAnimation24
    @JsAnimation24 9 months ago +4

    Thanks for this! I see you went with 96 GB system RAM and a 4080 with 16 GB VRAM. Curious whether 16 vs 24 GB VRAM (e.g. in a 4090) could make a difference for AI/ML, and especially LLM, apps? I realize a 4090 would have set you back another $1000 though. And is more system RAM helpful? What I'm reading is that GPU VRAM is more important.

    • @sourishk07
      @sourishk07  9 months ago +5

      Thanks for the question! Yes, VRAM is king when it comes to ML/AI. Always prioritize VRAM. More system memory will never hurt, especially with massive datasets, but I didn't want to spring for the 4090 because of its price tag. However, on FB Marketplace, I've seen RTX 3090s with 24 GB of VRAM for as low as $500, which was an option I should've considered while I was choosing my parts.

    • @federicobartolozzi680
      @federicobartolozzi680 9 months ago

      Imagine two of them with NVLink and the cracked version of P2P. Too bad you didn't see it earlier, it would have been a great combo. 😢 @@sourishk07

    • @xxxNERIxxx1994
      @xxxNERIxxx1994 9 months ago

      @@sourishk07 The RTX 3090 is a MONSTER! fp16 models loaded with 32k context running at 60 tokens/sec are the future :D
      Great video :)

    • @sourishk07
      @sourishk07  8 months ago

      @federicobartolozzi680 @xxxNERIxxx1994 Stay tuned for a surprise upcoming video!

    • @martin777xyz
      @martin777xyz 4 months ago

      I've seen build videos with 4x RTX 3090. VRAM is king

  • @JulianLugo-u1k
    @JulianLugo-u1k 9 days ago

    This video is amazing. Within a few days I'll be building an AI setup with an Epyc 7643, 256 ECC RAM, and 4 P40s, maybe 2 Nvidia 3070 cards. I'll try to record everything and share it

  • @JakubSK
    @JakubSK 4 months ago

    Just built a couple of these. I like it.

  • @andrewjenery1783
    @andrewjenery1783 2 months ago

    That's 48GB per memory module! What a spec and what a system! Which apps enable the actual machine learning?

  • @ashj1165
    @ashj1165 7 months ago

    very comprehensive video, thanks a ton!!!

    • @sourishk07
      @sourishk07  7 months ago

      You're very welcome!

  • @KushwanthK
    @KushwanthK 1 month ago

    Would it be possible to get a slower, more detailed version of the setup process showing how you connected everything? Sorry, I'm not a hardware guy and have really never set up my own PC 😢 but I'd love to do it this time for my ML projects. Thanks 🙏🏾 for the video

  • @benhurwitz1617
    @benhurwitz1617 9 months ago +1

    This is actually sick

  • @chiralopas
    @chiralopas 4 months ago

    You just don't know how many days I spent trying to find something that would let me run AI stuff from VS Code itself. Thanks!

  • @mufeedco
    @mufeedco 9 months ago

    This video is truly exceptional.

    • @sourishk07
      @sourishk07  9 months ago

      I'm really glad you think so! Thanks for watching

  • @HacknSlashPro
    @HacknSlashPro 6 months ago

    I make proper Gen AI and agentic framework videos in Bengali and never got views in three digits; good thing you chose English

    • @sourishk07
      @sourishk07  6 months ago +2

      Yeah I'm sure there's demand for Bengali content, but I suppose since more people speak English, it might be easier to get a larger audience. My Bengali isn't good at all so I don't really have an option haha

  • @hypernarutouzumaki
    @hypernarutouzumaki 7 months ago

    This is really great info! Thanks!

    • @sourishk07
      @sourishk07  6 months ago

      Glad you enjoyed it!

  • @GetJesse
    @GetJesse 3 months ago +1

    Good video. Do you have a follow-up video, since this was 6 months ago?

  • @Gabriel50797
    @Gabriel50797 5 months ago

    Great video. Are you running Nala? :)

    • @sourishk07
      @sourishk07  5 months ago +1

      Thanks! Haven’t heard of this but will definitely look into it more

  • @didiktri6770
    @didiktri6770 3 months ago

    thank you so much. great video

  • @WorldMover
    @WorldMover 21 days ago

    What a fantastic video

  • @Snakebite0
    @Snakebite0 4 months ago

    Very informative video 🎉

  • @maliniv8043
    @maliniv8043 1 month ago

    Excellent video

  • @joelg1318
    @joelg1318 5 months ago +1

    All I need for my AI machine is the GPU; I'm going for dual 3090 Ti's with 24 GB VRAM each. An AM5 X670E board with Gen5 PCIe will support both cards via PCIe bifurcation, splitting the Gen5 x16 into x8/x8.

    • @sourishk07
      @sourishk07  5 months ago

      That sounds like a sick idea! Good luck with the build!

  • @alexandre.hsdias
    @alexandre.hsdias 5 months ago

    This video is a gem

  • @akshikaakalanka
    @akshikaakalanka 6 months ago

    Thank you Sourish!

    • @sourishk07
      @sourishk07  6 months ago

      You're welcome! I appreciate you tuning in

  • @JEM871
    @JEM871 9 months ago

    Great video! Thanks for sharing

    • @sourishk07
      @sourishk07  9 months ago

      Thanks for watching! Stay tuned for more content like this!

  • @yellowboat8773
    @yellowboat8773 5 months ago +1

    Hey man, wouldn't it be better to get an older 3090 with the higher VRAM? That way you get similar performance but more VRAM

    • @sourishk07
      @sourishk07  5 months ago

      Haha yes you're right. I've received a lot of feedback about this, which is why I've upgraded to two 3090s actually! All the software is still the same though.
      This machine is now my editing/gaming rig!

  • @novantha1
    @novantha1 9 months ago

    I'm not sure if I like the idea of an AIO or water cooling in a server context. If it springs a leak, I think you're a lot less likely to be regularly maintaining or keeping an eye on a server that should, by definition, be out of sight.
    I'd also argue that the choice of CPU is kind of weird; I would personally have preferred to step down to something like a 13600K on a good sale, or a 5900X. They're plenty fast for ML tasks, which are predominantly GPU-bound, and you could have thrown the extra money from the CPU (and the cooler!) into a stronger GPU. The exact price difference depends on the context, but I could see it being enough to do something a bit different.
    I also think that an RTX 4080 Super is a really weird choice of GPU. It sounds kind of reasonable if you're just taking a quick glance at new GPUs, but the price-to-performance ratio is wack. It's in this weird territory where it's priced at a premium but doesn't have 24GB of VRAM; I would almost say if you're spending that kind of money you may as well have gone for a 4090 if you need Lovelace-specific features like lower-precision compute or something. Otherwise, I'd argue that a used 3090 would have made significantly more sense, and you could possibly have gotten two of them if you'd minmaxxed your build; a system with 48GB of VRAM would absolutely leave you with a lot more options than a system with 16GB. You could have power-limited them, too, if that was a concern.
    If you were really willing to go nuts, I've seen MI100s go for pretty decent prices for a headless server, and if you're doing "real" ML work where you're writing the scripts yourself, ROCm isn't that bad on supported hardware nowadays. That'd give you 32GB of VRAM (HBM, no less) in a single GPU, which isn't bad at all.
    Personally I went with an RTX 4000 SFF due to power-efficiency concerns, though.

    • @sourishk07
      @sourishk07  9 months ago +1

      Thank you so much for all of that feedback! Honestly, I agree with all of it; a couple of other people have commented similar things.
      But in my specific use case, my "server" is right next to my desk, so maintenance should be pretty easy. Not to mention that I've never really had any issues with AIOs in the 7 years I've been using them. Sure, a leak is possible, but I guess I'm willing to take that risk.
      I think I might need to switch this computer to be my main video editing computer and convert my current computer to be the server, because it has two PCIe slots.
      This was my first time building a computer from scratch solely for ML, so I appreciate the recommendations!

  • @renegraziano
    @renegraziano 7 months ago

    Wow, super complete information. I'm subscribed now 😮

    • @sourishk07
      @sourishk07  7 months ago

      Thank you so much for watching!

  • @archansen8084
    @archansen8084 9 months ago

    Super cool video!

  • @AvatarSD
    @AvatarSD 9 months ago

    As an embedded engineer I'm using the 'Continue' extension directly with my OpenAI API, especially GPT-4 Turbo for auto-completion. Seems my knowledge isn't enough for this world..😟
    Hello from Kyiv💙💛

    • @sourishk07
      @sourishk07  9 months ago

      Hello to you in Kyiv! I completely understand the feeling. With the field of ML/AI changing at such rapid paces, it's hard sometimes to keep up! I struggle with this often too

  • @ishanagrawal396
    @ishanagrawal396 3 months ago

    Thanks! Can you recommend a few laptops for high-end GenAI, AI, and ML projects? It would be of great help.

  • @leoliu5472
    @leoliu5472 4 months ago

    I too wish that a 4080 Super would be enough for machine learning, but if you are "refining" the algorithm (more parameters), how is 16 GB of VRAM going to cut it? 70 billion parameters use around 50-60 GB of VRAM. Any suggestions?

    • @init_yeah
      @init_yeah 4 months ago

      quantization

    • @leoliu5472
      @leoliu5472 4 months ago

      @@init_yeah it is a trade-off, thanks though
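The quantization suggestion above can be sanity-checked with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bytes per weight, plus some headroom for the KV cache and activations. A rough estimator (the 20% overhead factor is a crude assumption, not a measured figure):

```python
def approx_weight_vram_gb(n_params_billions: float, bits_per_weight: float,
                          overhead: float = 1.2) -> float:
    """Very rough VRAM estimate in GB for model weights alone, with ~20%
    headroom for KV cache and activations (a crude rule of thumb)."""
    bytes_per_weight = bits_per_weight / 8
    # billions of params x bytes each gives GB directly (1e9 / 1e9 cancels)
    return n_params_billions * bytes_per_weight * overhead

# A 70B model at fp16 vs. 4-bit quantization, and a 13B model at 4-bit:
print(round(approx_weight_vram_gb(70, 16), 1))  # -> 168.0 (multi-GPU territory)
print(round(approx_weight_vram_gb(70, 4), 1))   # -> 42.0  (still too big for 16 GB)
print(round(approx_weight_vram_gb(13, 4), 1))   # -> 7.8   (fits on a 4080 Super)
```

By this estimate even a 4-bit 70B model wants 40+ GB, which matches the thread's point that 16 GB of VRAM pushes you toward ~13B-class models.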

  • @BruceWayne15325
    @BruceWayne15325 4 months ago

    Thanks for the video. What kind of context window can your rig support?

  • @sohamkundu9685
    @sohamkundu9685 9 months ago

    Great video!!

  • @ricky33183
    @ricky33183 3 months ago

    Amazing guide! Can I install my 3070 and 3060 together, and can they be used together?

  • @isaiahaucapina6651
    @isaiahaucapina6651 2 months ago

    Would it make a big difference replacing the Intel i9 with AMD?

  • @ericksencionrestituyo1802
    @ericksencionrestituyo1802 9 months ago

    Great work, KUDOS

    • @sourishk07
      @sourishk07  9 months ago

      Thanks a lot! I appreciate the comment!

  • @manojkoll
    @manojkoll 8 months ago

    Hi Sourish, the video was very helpful.
    I found the following config on Amazon; how would you rate it? I plan to run some Ollama models and a few custom projects leveraging smaller LLMs:
    Cooler Master NR2 Pro Mini ITX Gaming PC - i7 14700F - NVIDIA GeForce RTX 4060 Ti - 32GB DDR5 6000MHz - 1TB M.2 NVMe SSD

    • @sourishk07
      @sourishk07  7 months ago

      Hi, sorry for the late reply; I was busy working on my most recent video.
      The biggest thing I would check is whether that's the 8 GB or the 16 GB variant of the 4060 Ti. Definitely avoid the 8 GB one at all costs. Also, consider buying a used GPU, as sometimes you may be able to get good deals on those. The other specs look fine to me, as long as you think the price on Amazon is reasonable.

  • @agi_lab
    @agi_lab 3 months ago

    Tailscale is epic. This video made me aware of Tailscale. My router doesn't allow port forwarding, so now I can SSH from a different network.
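For context on the comment above, the no-port-forwarding workflow boils down to a couple of commands. A minimal sketch (the `user@100.x.y.z` address is a placeholder for your machine's actual tailnet IP):

```shell
# On the server (one-time): install Tailscale and join your tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# On any other device logged into the same tailnet:
tailscale status        # lists peers and their 100.x.y.z tailnet addresses
ssh user@100.x.y.z      # reachable from any network, no port forwarding needed
```

Because traffic goes over the WireGuard-based tailnet, the server never has to expose an SSH port to the public internet.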

  • @punk3900
    @punk3900 8 months ago

    Is this system good for inference? Will Llama 70b run on it? I wonder whether RAM really compensates for the VRAM

    • @sourishk07
      @sourishk07  8 months ago +1

      Hello! That's a good question. Unfortunately, 70b models struggle to run. Llama 13b works pretty well. I think for my next server, I definitely want to prioritize more VRAM

  • @RazaButt94
    @RazaButt94 9 months ago +1

    With this as a secondary machine, I wonder what his main gaming machine is!

    • @sourishk07
      @sourishk07  9 months ago +3

      LOL you'll be surprised at this: my main gaming machine is an Intel 12700K and a 3080 12 GB. ML comes before gaming 🙏

  • @electronicstv5884
    @electronicstv5884 8 months ago

    This Server is a dream 😄

    • @sourishk07
      @sourishk07  8 months ago +1

      Haha stay tuned for a more upgraded one soon!

  • @punk3900
    @punk3900 8 months ago

    Hi, what is your experience with this rig? Isn't it a problem for temperatures that the case is so tight?

    • @sourishk07
      @sourishk07  8 months ago

      Temperature has not been an issue, even with the small case size

  • @everything.Nothing
    @everything.Nothing 1 month ago

    Greetings from Türkiye, thanks for the useful video

  • @maaheedgaming4055
    @maaheedgaming4055 6 months ago +1

    Please make some project videos so we can learn from you, bro.

    • @sourishk07
      @sourishk07  6 months ago +1

      Will do! Feel free to check out my DDQN implementation or my NeRF videos until then!!

  • @kawag2780
    @kawag2780 9 months ago

    You could have started the video with the budget you were targeting. When recommending systems to other people, knowing how much the person can spend heavily dictates the parts they can choose.
    Here are some questions I thought of while watching the video. Why choose a 4080 over a 3090? Why choose a gaming motherboard, or one in a Mini-ITX form factor? Why choose a "K" SKU for a production-focused workload? There's missed commentary there.
    I know that you have tagged some of your other videos, but it could have been better to point out that you already have a NAS tutorial. Linking that video when introducing the 1TB SSD would have been helpful.
    And finally, why is the audio not synced up with the video? It's very jarring when that happens. Other than that, it was cool to see the various programs that you can use. However, I feel the latter part is tacked on, because it's hard to gauge how the hardware you chose affects the software you chose to showcase.

    • @sourishk07
      @sourishk07  9 months ago

      Wow, thank you so much for your in-depth feedback! I sincerely appreciate you watching the video and sharing your thoughts. I apologize that the video didn't initially clarify some of the hardware choices and budget considerations. In retrospect, you're absolutely right, and I'll ensure I include such details in future content.
      I chose the 4080 Super because it has the newest architecture, along with the fact that I was able to get it at a discount. The extra VRAM of the 3090 would've helped with larger models like LLMs and Stable Diffusion, but for a lot of my personal projects, such as training a simple RL agent or even some work with computer graphics, the extra performance of the 4080 Super will serve me better. Again, something I should've added to the video.
      For the "K" SKU, I got the CPU on sale at Best Buy for about $120 off, and the motherboard has an "AI overclocking" feature, which I thought would be kinda on brand for the video lol. I didn't really get a chance to touch on it in the video or even benchmark any potential performance gains the feature might've given me. Regarding the SFF build, I chose the form factor just because I have a pretty small apartment and don't have much space. These are things I'm sure the viewers of this video might've been interested to hear about, and I appreciate you inquiring about them.
      I also agree with your point about my NAS video! I'll keep that in mind the next time I mention a previous video of mine.
      And regarding the audio, everything seems fine on my end? I've played the video multiple times on my desktop, phone, and iPad. Hopefully, it was just a one-off issue. Also, I suppose the software I installed isn't really too dependent on this specific hardware; rather, it's the suite of tools I would install on any machine where I plan on doing ML projects.
      Thank you once again for such constructive feedback. I'm curious: what topics or details would you like to see in future videos? Your input helps me create more tailored and informative content.

  • @SamKhan-kb3kg
    @SamKhan-kb3kg 5 months ago +3

    How much did it cost you?

  • @danielgarciam6527
    @danielgarciam6527 9 months ago

    Great video! What's the name of the font you are using in your terminal?

    • @sourishk07
      @sourishk07  9 months ago

      Thank you for watching! The font is titled "CaskaydiaCove Nerd Font," which is just Cascadia Code with icons added, such as the Ubuntu and git logos.

    • @Param3021
      @Param3021 8 months ago

      @@sourishk07 Ohh, I was literally trying to find this font for a long time; I'll install it today and use it.

    • @sourishk07
      @sourishk07  8 months ago +1

      @@Param3021 Glad to hear it! Hope you enjoy! It works really well with Powerlevel10k

  • @JayG-hn9kf
    @JayG-hn9kf 9 months ago

    Great video! I never got the Continue extension working in code-server. Is there a step that I may have missed?

    • @sourishk07
      @sourishk07  9 months ago +1

      Thanks for watching! And regarding the Continue extension, what is the issue you're running into?

    • @JayG-hn9kf
      @JayG-hn9kf 9 months ago

      @@sourishk07 Thank you for offering support 🙂 I have followed your steps exactly; however, I don't get the Continue text zone to ask questions, not even the drop-down list to choose the LLM or set it up. I tried the Continue Release and Pre-release, but neither worked. Could the fact that I have Ubuntu Server running as a VM under Proxmox with GPU passthrough have an impact?

    • @sourishk07
      @sourishk07  9 months ago

      I don't believe the virtualization should affect anything. When you go to install the Continue extension, what version are you seeing? Is it v0.8.25?

    • @marknivenczy1896
      @marknivenczy1896 8 months ago

      I've tried twice to post help with this, but YouTube does not like me adding a URL. Anyway, I found I needed to run code-server under HTTPS in order for Continue to run. If you open code-server under HTTP, it will issue an error (lower right) that certain webviews, the clipboard, and other features may not operate as expected. This affects Continue. You can find the fix by searching for: Full tutorial on setting up code-server using SSL - Linux. This uses Tailscale, which Mr. Kundu has already recommended.

    • @sourishk07
      @sourishk07  8 months ago

      Thanks for sharing this insight! I probably should've specified that I set up SSL with Tailscale behind the scenes to avoid that annoying pop up message. I apologize for not being clearer!
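The HTTPS fix discussed in this thread can be done entirely with Tailscale's built-in certificates. A minimal sketch (the `machine-name.tailnet-name.ts.net` domain is a placeholder for your machine's actual tailnet DNS name, and HTTPS must be enabled in the tailnet's admin settings):

```shell
# Issue a TLS certificate for this machine's tailnet DNS name
sudo tailscale cert machine-name.tailnet-name.ts.net

# Point code-server at the generated cert and key so webviews
# (which the Continue extension relies on) work over HTTPS
code-server --cert machine-name.tailnet-name.ts.net.crt \
            --cert-key machine-name.tailnet-name.ts.net.key
```

Serving code-server over HTTPS removes the "webviews, clipboard and other features may not operate as expected" warning mentioned above.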

  • @club4796
    @club4796 6 months ago

    Can we play games from this server remotely, like AAA games on an iPad or MacBook?

    • @sourishk07
      @sourishk07  6 months ago

      Yes, you would be able to, although using Windows + Parsec, or some sort of hypervisor might make things easier than natively gaming on Linux.

  • @mrb180
    @mrb180 6 months ago

    How do you get VS Code to have this modern-looking, curvy UI? Mine looks nothing like that

    • @sourishk07
      @sourishk07  6 months ago +1

      The font that I use is Cascadia Code and the theme that I use is Material Palenight!

  • @nessim.liamani
    @nessim.liamani 4 months ago

    It sounds like you've got an amazing project set up for a machine learning server, and your breakdown of the entire build, from the hardware components to setting up the software environment, is incredibly detailed and helpful! You're essentially creating a high-powered ML server to take on intensive tasks while keeping your main computer free for gaming and other activities.
    Some key takeaways from your setup:
    - **Hardware Configuration**: You've selected a powerful combination of the Nvidia GeForce RTX 4080 Super, Intel 14900K, and 96 GB of RAM, which makes this machine incredibly suitable for deep learning, large model training, and inference tasks.
    - **Ubuntu Server 22.04 LTS**: Opting for the server edition without a desktop environment ensures that all available resources are dedicated to ML tasks, minimizing unnecessary overhead.
    - **TailScale for Remote Access**: Installing TailScale was a great idea for easy remote management, especially since it allows you to access your server from anywhere.
    - **Nvidia Driver Setup**: Downloading the drivers directly from Nvidia was the right move for ensuring compatibility, especially since ML tasks require stable GPU drivers to fully utilize the CUDA cores.
    - **Using Llama Models for Local LLM Tasks**: Llama and its variants are an excellent choice for running natural language models locally, especially with quantized versions that use less VRAM.
    - **Code Llama for Coding Assistance**: Running Code Llama on the server to assist in coding through VS Code is a smart way to offload compute-heavy operations while still getting real-time assistance in your IDE.
    - **Docker with GPU Support**: Installing Docker with Nvidia's container toolkit ensures you can easily run different ML frameworks like TensorFlow or PyTorch in isolated environments without worrying about dependency conflicts.
    - **Isaac Gym**: Incorporating Nvidia's Isaac Gym for reinforcement learning environments shows how versatile your build will be, capable of simulating robotic environments efficiently.
    - **TensorFlow Setup**: Walking through the Cuda and cuDNN setup ensures your server can handle TensorFlow-based workloads, adding even more flexibility to your ML setup.
    This setup will give you the ability to handle a wide range of ML tasks from natural language processing to video generation, all while keeping your main machine ready for other activities like gaming. The detailed walkthrough of your hardware and software choices really sets the stage for some exciting projects ahead.
    Good luck with your future ML endeavors, and feel free to share updates on how it performs with different projects!
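    The local LLM piece of that summary can be sketched with Ollama in two commands (the model tag below is just an example; pick whatever quantization fits your VRAM):

    ```shell
    # Pull a quantized Code Llama model and ask it a question locally
    ollama pull codellama:7b
    ollama run codellama:7b "Write a Python function that reverses a string"
    ```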

  • @moadtahri6605
    @moadtahri6605 2 months ago

    I have a Dell PowerEdge R530 with 2 TB of storage and 128 GB of RAM running Windows Server 2016. Any ideas on using it as an AI server?

  • @matthewmonaghan9337
    @matthewmonaghan9337 6 months ago +57

    why would you use intel, the cpu is going to destroy itself in 3 months

    • @slazy824
      @slazy824 6 months ago +5

      Set the PL1 and PL2 power limits to Intel spec and use proper cooling, and then the problem is solved.

    • @TekTakes
      @TekTakes 6 months ago +7

      @@slazy824 No, it won't be solved. Even running at super conservative clock speeds still degrades the CPU.

    • @sourishk07
      @sourishk07  6 months ago +11

      I have faced some instability issues with my CPU so far, but the funny thing is that by disabling XMP, everything is working. I actually have ASUS's AI overclocking feature enabled with no issues. To be honest, this totally might crap out my CPU but hopefully Intel can push the microcode update soon

    • @zhou0001
      @zhou0001 5 months ago

      Recently there has been talk about 14th and 13th gen Intel processors having overheating issues, especially when overclocked. Do you think you may be facing a similar problem?

    • @sourishk07
      @sourishk07  5 months ago +1

      @@zhou0001 That definitely is a possibility. I have indeed started to experience weird instability issues so I've already submitted an RMA request haha

  • @sinamathew
    @sinamathew 6 months ago

    I love this.

  • @alirezahekmati7632
    @alirezahekmati7632 8 months ago

    GOLD!

    • @sourishk07
      @sourishk07  8 months ago

      Thank you so much!

    • @alirezahekmati7632
      @alirezahekmati7632 8 months ago

      @@sourishk07 it would be great if you created a part 2 about how to install WSL2 on Windows for deep learning with the NVIDIA WSL drivers

    • @sourishk07
      @sourishk07  8 months ago +1

      @@alirezahekmati7632 From my understanding, the WSL2 drivers come shipped with the NVIDIA drivers for Windows. I didn't have to do any additional setup. I just launched WSL2 and nvidia-smi worked flawlessly
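      If it helps, the whole sanity check is just the following (the CUDA image tag is only an example; any recent tag works):

      ```shell
      # Inside the WSL2 distro: the Windows NVIDIA driver exposes the GPU automatically
      nvidia-smi

      # With Docker GPU support set up, a container should see the GPU too
      docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
      ```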

  • @metafintek
    @metafintek 3 months ago

    Why would you build with lower VRAM?

  • @reverse_meta9264
    @reverse_meta9264 3 months ago

    No thermal paste on CPU?

  • @vauths8204
    @vauths8204 6 months ago

    I didn't see thermal paste on that processor. Does it not need it?

    • @kashyapkshitij
      @kashyapkshitij 6 months ago +1

      The thermal paste comes pre-applied on the cooler.

    • @vauths8204
      @vauths8204 6 months ago

      @@kashyapkshitij oh that's sick I haven't attempted this myself. good stuff

    • @kashyapkshitij
      @kashyapkshitij 6 months ago

      @@vauths8204 All good, just don't forget to peel off the sticker when you do.

  • @LouisDuran
    @LouisDuran 6 months ago

    How is your i9-14900K holding up for you?

    • @sourishk07
      @sourishk07  5 months ago

      I think I might need to RMA it tbh. I'm definitely facing some instability. Wouldn't recommend rip

  • @Marioz08
    @Marioz08 2 months ago

    Has anyone tried this in a VM yet? I want to do this, but I host my systems in a data center and want to know if it will cause too many issues running via something like Proxmox or VMware ESXi. I want to run it from there so that if the OS crashes I can just reboot via the hypervisor and don't need to drive to the system each time.

  • @alvaromorales5967
    @alvaromorales5967 6 months ago

    Could it be done on Windows with WSL?

    • @sourishk07
      @sourishk07  6 months ago

      Yes it can! I love WSL because if you have your NVIDIA drivers for Windows installed, your WSL instance will have them too! Same goes for Docker.
      Some of the setup steps might be different for WSL, so definitely be sure to look out for that

  • @jetman-x4e
    @jetman-x4e 5 months ago

    Why not an RTX 3000 Ada generation, or 4000 or 5000 even, rather than a 4090?

    • @sourishk07
      @sourishk07  5 months ago +1

      Hey, those are valid choices as cards. In this video I should've considered those and probably chosen a better card than the 4080. I was more focused on the software here

  • @GodFearingPookie
    @GodFearingPookie 6 months ago

    Subscribed

    • @sourishk07
      @sourishk07  6 months ago

      Haha thank you so much! Stay tuned for more ML content!

  • @Four_Kay
    @Four_Kay 6 months ago

    Sick setup, but why only 1 TB?

    • @sourishk07
      @sourishk07  6 months ago +1

      Lol yeah fair. But 1 TB hasn't posed an issue yet. I have my NAS mounted on the server so I can easily offload any large model files that I'm not currently using, which makes 1 TB much more usable
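      In case anyone wants to replicate the NAS part, here's a rough sketch of an NFS mount (the server name, export path, and mount point are placeholders; yours will differ):

      ```shell
      # One-off mount of a NAS export for large model files
      sudo mkdir -p /mnt/nas/models
      sudo mount -t nfs nas.local:/volume1/models /mnt/nas/models

      # Or make it persistent with an /etc/fstab entry:
      #   nas.local:/volume1/models  /mnt/nas/models  nfs  defaults,_netdev  0  0
      ```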

    • @Four_Kay
      @Four_Kay 6 months ago

      @@sourishk07 Yeah, if it works for you, you are good... 🥲

  • @dani5052
    @dani5052 2 months ago

    Does it work with AMD graphics cards?

    • @mrtedi1593
      @mrtedi1593 2 months ago

      For machine learning unfortunately no

  • @TunjungUtomo
    @TunjungUtomo 4 months ago

    Can you please upload a Shorts video of your rig in full swing training on some data? I want to know how noisy your setup would be

  • @fukra_lnsaan
    @fukra_lnsaan 21 days ago

    And people say Bengalis are sleeping 😭🤣🤣🤣... Kudos to you, dear

  • @VincentStoessel
    @VincentStoessel 2 months ago

    What was the cost?

  • @Faheem1988
    @Faheem1988 3 months ago

    Why not Xeon or Threadripper?

  • @T___Brown
    @T___Brown 9 months ago +1

    I didn't hear what the total cost was

    • @sourishk07
      @sourishk07  9 months ago +4

      Thanks for the comment. While focusing on the small details of the video, I completely forgot some of the important information haha. The cost pre-tax was $2.8k, although components like the motherboard do not have to be as expensive as what I paid. I was interested in the AI overclocking feature but never got around to properly benchmarking it. Anyway, I've updated the description to include a Google Sheet with a complete cost breakdown.

    • @T___Brown
      @T___Brown 9 months ago +2

      @@sourishk07 thanks! This was a very good video. Thanks

  • @ketankbc
    @ketankbc 7 months ago

    Where is CPU cooling gel?????

    • @sourishk07
      @sourishk07  7 months ago

      If you mean the thermal paste, the CPU cooler came with it pre-applied!
      Otherwise, the AIO has its own coolant that it comes with inside to cool the CPU.

  • @aadilzikre
    @aadilzikre 8 months ago

    What is the total Cost of this Setup?

    • @sourishk07
      @sourishk07  8 months ago

      Hi! The total cost was about $2.8k, although I probably should've gone cheaper on some parts, like the motherboard. I have a full list of the parts in the description

    • @aadilzikre
      @aadilzikre 8 months ago

      @@sourishk07 Thank you! I did not notice the sheet in the description. Very Helpful!

  • @abhiseckdev
    @abhiseckdev 9 months ago +2

    Absolutely love this! Building a machine learning rig from scratch is no small feat, and your detailed guide makes it accessible for anyone looking to dive into ML, from hardware selection to software setup.

    • @sourishk07
      @sourishk07  9 months ago +2

      Thank you so much!!! I appreciate the support 🙏

  • @victorhenostroza1871
    @victorhenostroza1871 3 months ago +1

    I am waiting for the RTX 5090 to run LLMs