Hey TQers. To clarify, our community post yesterday says "soon" and not "immediately" :)
Reading comprehension is apparently a skill not many people have mastered.
@@Sassi7997 This comment can't stop me, I can't read
hi linus
@Sassi7997 The problem isn't that we can't read, it's why the change is happening in the first place
We don't care about that, but why is the change happening in the first place?
According to CompTIA and the Network+ exam, a server can be "anything that can be assigned an IP address"
Gotta love using an Xperia Arc as a NAS, best Server experience so far
There are a lot of CompTIA things that could be argued about...
Maybe that's true because it's something you want it to be. Nothing's stopping you.
Does it even really need an IP, or a physical connection? What about something like an Xorg display server, that apps on the same machine can talk to through a socket? It's surely just something that "serves" requests, but maybe that's too broad, and I guess they implicitly meant typical industry server hardware here.
CompTIA is technically correct. In the early 90s, engineers would do things like set up microcontrollers with just enough smarts to run an IP stack so they could telnet to some TCP port or other to see how much a coffee pot weighed, and thus whether or not somebody needed to make a new pot. That's a server in action.
When introducing someone to server hardware, I use an analogy that covers most of the basics:
If a consumer computer is a pickup truck, then a server is a semi-truck.
You operate both of them in a similar way: steering wheel, pedals, fuel. You can use them to accomplish the same/similar tasks: hauling various loads.
But a semi-truck is driven a little bit differently; it has more gears; it can haul more; it is more resilient and can go longer without stopping. In some ways, it is built to be easier to maintain. But underneath all of the nuance, it has the same basic form and function as any consumer device.
There are also software nuances, but you can also run most server OSs on consumer gear anyway. So, I focus on the difference in hardware.
After 25 years of using desktop hardware for my home server (upgrading and replacing over the years), I finally put together a real-deal server. 2 CPUs, 72 threads, 96 GB of RAM (128 GB on the next power cycle), 5 drives in 2 RAID arrays, 4U case.
It's GLORIOUS!!
I grabbed an ex-business CPU+MOBO+RAM combo from eBay (there are a lot on there, and /r/homelab and /r/datahoarder have recommendations about which to use). Thinking about upgrading my 2-CPU, 40-thread, 128GB RAM server to one of the Epyc options available, which can be had for as little as $1k for a 32-core Rome + MOBO + 128GB RAM, since I now have 180TB of space.
To do what? Nothing worthwhile.
@@I_Am_Your_Problem Hosting your own Netflix (Jellyfin), Google cloud (Nextcloud), backups, surveillance. It's awesome. Also, what I learned transferred directly to my job.
living the dream dude!
I moved to this about 4 yrs ago. Got a Dell T110 series tower server. It's like a huge PC but almost silent, so great for home use. And it has tons of drive bays, so it was great to max out with storage and run ESX for all my VMs. At one point it had been on for 2.5 yrs without being rebooted. Recently re-worked the whole thing and moved it onto Proxmox, which was daunting at the time, but now I love it. It even seems more efficient at resource handling.
Super cool! Glad to help on this one.
ok.
Nice to see you here dawg
What a coincidence!!! Just came here after watching STH's "MAX Fanless 10Gbase-T Mini PC and Router" video, only to find that Patrick from STH helped with this video!!!
Also, hot-swappable drives, dual redundant power supplies, an iLO board (Integrated Lights-Out), multiple NICs, the list goes on.
Oh yeah, and lots of noise.
Totally.
@@MrPants-xy6db Maybe, but also maybe not. Some servers are water cooled. It depends partly on where they are installed.
@@christophernugent8492 I don't think watercooled servers are very common in the server space.
@@JJFlores197 Google does. I have seen them first hand. I'm an electrician and worked for a company that built one of their data centers from the ground up, then stayed on for 3 months for QC. I have also worked in other data centers. Theirs is on a whole other level.
@@JJFlores197 It's becoming more common. Most of our new HPC machines are actively watercooled; our older HPC machines are passively watercooled.
3 TB of RAM is crazy. But I still bet Google Chrome would find a way to use it all.
JVM would still complain of not enough ram.
Modded KSP probably
LTT actually did a video on how much system RAM Google Chrome can use. I think the OS stopped it at around 12GB of RAM used by Google Chrome tabs alone. Chrome seems to require even more system RAM now than when they did that video, but if you turn on the performance option in the browser settings, the RAM required goes down a lot.
hey, that's only like 30,000 tabs
You can put a lot more
Love the fact that the green screen has the same style as years ago!
Probably a vid that was scheduled for release earlier, before the hiatus post 😅
Some videos are more like side projects and may spend months on the back burner while being edited.
Where is that post?
no more LTT content? NOO
@dariosantacruz the community tab on this channel
@@TKInternational76 LTT is staying, but TechQuickie, Mac Address and GameLinked are going.
The difference is function, not form. We had a dozen old desktops which used to function as PCs that we started using as print servers (of sorts... it's complicated). They were formerly developer desktops and, when they were too old to be used as such, we just started using them as interfaces between mainframe and printers (again, it's complicated). Nothing in the HW was changed; the function was changed. I've used old PCs as servers at home as well, with no or little HW change. So again, it's function, not form.
The only difference between a PC and a server is its purpose. If it connects to other computers, it’s a client, if it accepts connections, it’s a server, it’s as simple as that.
I'd say redundancy plays a big part, and only if you deploy servers in numbers. Servers can take decades' worth of beating before falling over, while desktop components will just fail, taking the whole thing down. Servers have a lot of redundancy built in. A NIC, PSU, random fan, stick of RAM, whatever dies: when the environment is set up right, just let the host shut down, fail over, swap out the hardware, and reboot it. Sometimes you can even just hot-swap most of it. Depending on how your cabling job in the back of the rack is lol
@@Baulder13 I don't think that what you said is necessarily important for a PC to fit the definition of a server.
I connected an old 2014 cheap laptop to my router and forwarded some ports, I host a website, and a VPN, a password manager and other small services on it.
By all practical means this is a server, but it's not the big racks upon racks of compute power with the heaps of redundancy you imagine.
Yeah, I would say that the biggest difference is the software.
They should have specified that they mean dedicated or high-performance servers.
An old gaming PC, laptop, or Raspberry Pi running some services is also a server, just not a rack-mounted HPC machine.
Well... servers are designed for heavy I/O and a ton of RAM and disks.
Server processors are usually triple-channel instead of the dual-channel of a PC.
The processors have lower clocks and many more cores, and use a proven architecture (client PCs usually use the latest architecture, while Xeon, for example, takes a long time to switch to a new one).
Motherboards and power supplies are proprietary instead of standard...
Usually they have iDRAC or some remote technology for accessing them... enterprise PCs have vPro/AMT, but it's not the same level of out-of-band management.
Video suggestion - maybe for the main channel: building a new NAS/streaming server without going crazy with custom scripts or unobtainable hardware.
A plug and play server without any customization needed? Well that's what NAS manufacturers sell ya. What's the point of a server then? 😅
Allow me to clarify.
"Configuration" has two aspects:
1. The hardware.
2. The software.
The same "PC" (Personal Computer) with the same hardware (although not ideal) can play the role of a "Personal Computer" or a "Server". So, if the hardware part of the configuration doesn't define the machine's role or purpose, then what does? The other part: the software.
Any computer, regardless of its platform (hardware + operating system), can be a server if it has server software on it. Your smartphone can be a server.
That is the key point. A "server" is any computer that has server software on it listening for incoming requests to "serve" (regardless of the requests' nature: web, FTP, printing, etc.).
The hardware part is an optimization and efficiency factor, not a role-defining factor.
I hope this helps.
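A minimal sketch of that definition in Python (the port is picked by the OS, the messages are arbitrary): the same machine plays both roles a moment apart, and nothing about the hardware changes.

import socket
import threading

srv = socket.create_server(("127.0.0.1", 0))   # port 0: let the OS pick a free port
host, port = srv.getsockname()                 # the "IP address and port" of the service

def serve():
    conn, _addr = srv.accept()                 # wait for a client
    with conn:
        request = conn.recv(1024)              # read the request
        conn.sendall(b"you sent: " + request)  # serve a response

threading.Thread(target=serve, daemon=True).start()

# The very same box can be the client a moment later -- role, not hardware.
with socket.create_connection((host, port)) as c:
    c.sendall(b"hello")
    print(c.recv(1024))                        # b'you sent: hello'
srv.close()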
I mean, technically server and PC are just applications, use cases. Yes, a computer can be designed to run server or PC tasks more effectively, but you can also just use a PC as a server, for example a cheap Minecraft server for your buddies from an old office tower. Or perhaps picking up a decommissioned server to use as a PC, and just throwing a GPU in it. But this was still probably the most thorough video explaining the differences yet.
dear linus, please push this over to the main channel to roughly educate your viewership on the difference between desktop > workstation > server > computing cluster.
(and while you're at it: a Mac is still a PC, as is a Raspberry Pi, or stone-age Personal Computers like a C64.)
I'm actually going to miss this channel.
What's happening?
@mossy_6475 They're retiring it and a few others, part of a new corporate strategy.
@@mossy_6475 Check the community tab; whatever it says is all the rest of us really know.
@@abigaillilac1370 This one too??? Retiring MacAddress was sad enough
Kinda surprised this is the first video on this. Good for people to know.
Probably a video made before the hiatus notice?
"Damn, they don't know how to read"
WHAT hiatus notice??!?
@@QueMusiQ Techquickie, GameLinked, and MacAddress are going on indefinite hiatus
@@shishsquared Where did you see that notice?
@@kmeanxneth the community tab in the channel
Definitely gonna subscribe for more tech quickies
This is my fav channel please dont stop❤
Techquickie is the only channel I have alerts on for, pls don't stop posting here
It’s been a fun journey. Hopefully this channel will still remain.
What? Why would this channel not remain anymore?
@@Dac_DT_MKDSee yesterday's community post. This channel is going on indefinite hiatus along with GameLinked and MacAddress
@@shishsquared nowhere did it say indefinite. Based on what it says it’s most likely temporary. I don’t see why they would get rid of techquickie.
A server is a device that primarily runs server software (not client software). So if you run a dedicated game server on an old laptop/raspberry pi that qualifies as a server because it's providing a service for the connected clients but this falls under home servers.
I think this video is aimed at explaining business or enterprise server configurations.
I feel like I've seen this from LTT before, but this is way more concise!
7:16 - someone's getting fired in the morning!
Memory errors in gaming PCs are more common than most people think. A single bit error, which ECC can detect and correct, doesn't necessarily mean that a program will crash. If you look at a gaming PC, the executable code in memory is not all that large. Most memory is used for graphical elements, sounds, game maps and so on: data that doesn't contain executable code. If a single pixel in a graphical element is the wrong color, or a single sample of a sound has a bit error, there is little risk that the game or OS will crash. In a lot of cases you won't even notice anything more than a blip in the sound, a strange frame in a video, or a single pixel in some texture that has the wrong color. The code running through the processor is pretty damn small in memory compared to all the other assets used by the OS, the programs you use, or the game you are playing.
Now, I did say that ECC can detect and correct single bit errors, and that's the bog-standard ECC that's been with us for many decades. It will detect dual bit errors and more, but it can't correct them. A single bit error is something you will not notice when using a machine with ECC memory. All that happens is that there will be a note in the system log saying an error was detected and corrected. One such message is not worth thinking about; it's just ECC doing what it is supposed to do. There is a real problem, however, if you start seeing numerous ECC messages in the log. If they are occurring several times per second or so, the log in Windows will just have a message that errors have occurred and been corrected, but as they are so common, reporting of them is paused after some number has been reached. It's been years since I worked with this, but I think it was around two hundred errors before pausing. If this happens, there is a real problem that should be addressed ASAP. The thing is, if an additional bit error occurs so that there are two or more bit errors on the same read, the OS will stop the machine, as ECC can't repair the error.
So why do bit errors occur? Not all bit errors mean there is a real memory problem. Most of them are simply generated as particles of cosmic radiation strike a memory cell and manage to change the value, making a "1" a "0" or the other way around. This is just something that happens, and we really can't shield computers against it. These particles can go through the earth, so a lead shield or something like that wouldn't make much difference. So we use ECC memory in servers to try to keep the bits in order.
Now, there are more advanced ECC and memory protection techniques. I remember that many years back some chip manufacturer designed an ECC-based technique that would detect and correct up to dual bit errors. I can't remember the details though. There was also one that would map memory as unusable if it detected major errors in a certain memory segment. The intention was that if a machine crashed because of memory errors that the ECC couldn't correct, it would map that memory as unavailable when the machine was restarted, making it usable but with limited memory. I think it deactivated a whole DIMM at a time, but again, this was a long time ago and I can't remember the details. Also, this was way back when the chipset on the motherboard contained the memory controller. Unlike today, the chipset manufacturers could invent their own ways to protect memory, something they can't do now that the memory controller is integrated in the CPU.
Some really interesting solutions have been lost with the new modern processors, but the added performance is considered to be worth it. Today it's the processor that has to support ECC, something Intel is bad at doing for their desktop processors. The Ryzen processors from AMD are a bit better at this, but even then not all models support ECC, and a lot of motherboards also lack ECC support.
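For the curious, here's a toy Python sketch of that single-error-correct / double-error-detect idea, using a Hamming(7,4) code plus an overall parity bit. Real DIMM ECC protects 64 data bits with 8 check bits, but the principle is the same.

def encode(d1, d2, d3, d4):                  # 4 data bits in
    p1 = d1 ^ d2 ^ d4                        # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                        # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                        # parity over positions 4,5,6,7
    word = [p1, p2, d1, p3, d2, d3, d4]      # codeword positions 1..7
    word.append(sum(word) % 2)               # overall parity bit for double-error detection
    return word

def check(word):
    w = word[:7]
    s = (w[0]^w[2]^w[4]^w[6]) + 2*(w[1]^w[2]^w[5]^w[6]) + 4*(w[3]^w[4]^w[5]^w[6])
    parity_ok = sum(word) % 2 == 0           # s is the position of a single-bit error
    if s == 0 and parity_ok:
        return "clean"
    if not parity_ok:                        # odd number of flips: a single-bit error
        word[(s - 1) if s else 7] ^= 1       # flip it back (s == 0: the parity bit itself)
        return "corrected"
    return "double-bit error: detected, not correctable"

w = encode(1, 0, 1, 1)
w[2] ^= 1                                    # one cosmic-ray flip
print(check(w))                              # corrected (and w is repaired in place)
w[2] ^= 1; w[5] ^= 1                         # two flips on the same word
print(check(w))                              # double-bit error: detected, not correctable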
Beautifully presented with clear explanations. Thanks!
Hey that nuke plant is in my hometown, Berwick PA! 1:04
Great timing on the video :) I've recently become obsessed with homelabbing. Also, what's with the PowerPoint transition on 0:26 😭
That hiatus was quick
So this is the difference between data center and consumer, not PC and server. It's easy to conflate the two, but they aren't the same. Anyone with a NAS or a wifi printer has a server on their home network. A server is just any machine that provides a service to other PCs on the network.
This is 100% enterprise server vs PC hardware; without that distinction, this is full of errors.
Not the first time LTT made a video that was wrong/misleading.
There are a few points you completely left out.
POST: The big manufacturers build all of their servers with proprietary components--none of them will even accept you installing comparable parts, like DIMMs. They check the firmware at POST, and if the DIMM doesn't have, for example, IBM firmware, the system board will simply switch the DIMM off and report it. All components communicate with the system board and report diagnostic information at every event.
System boards have their own system-on-a-board management modules built in, which operate independently of the system board. They start before the system board starts; in fact, they control the system board even starting, and they monitor it. These on-board systems generally run some proprietary Linux OS which starts up as soon as power is provided to the system board. Once they have started up, they start the system board, if configured to do so. They can also be configured to NOT start the system by default, or only if the system was shut down correctly, so that a defective system which crashes cannot try to start itself again until all events have been fixed. This prevents servers from thrashing.
These system-on-a-board modules have different names, depending on the manufacturer, model, and generation, but they all fall under the category of LOM (Lights Out Management), meaning that much of the maintenance of servers can be done over intranet connections, allowing one to manage even shutting down and restarting a server, which used to require somebody to physically flip a switch on the server but which can now be done electronically. The only reason anyone needs to actually set foot inside a data center is to replace physical parts, like disk drives or DIMMs.
The LOMs also allow for having the servers monitored by applications like HP OpenView, which monitors attached systems both passively and actively and can be found in practically every professional data center in the world.
Those are three very major things which differentiate servers from PCs in professional environments.
And you expected Linus or any of his team (who gave him information) to know this?
@@ganjasage420 I don't know why you have your panties in a bunch.
In professional IT, these aspects of server management are paramount.
What I expected has nothing to do with anything. Linus demonstrated that either he doesn't know about server management, or he didn't find it important enough to report on. In either case, as a professional with 35 years in IT, I presented what I thought was interesting and important.
@@DerOrso Because I'm tired of people making Linus out to be some face for computer technology. Without a team, this dude has no more knowledge than someone like me who plays around with computers for fun. He is certainly not a face for tech.
As I said, you expected them to know anything you know given your 35 years in IT? They could've asked people like you in the field for factual information; instead they make videos with incorrect information. Him and his team try to cover any aspect of computer tech with whatever jargon they find, while having a narcissistic attitude in videos, like they know better than the people who work in those fields. So yes, I do have my panties in a bunch whenever I see his mug and hear him speak.
@@ganjasage420 I in no way disparaged Linus for not reporting on the points I filled in. I filled them in factually, without comment on Linus at all, simply to cover the points he didn't.
That you think you can state exactly what Linus knows and what he doesn't shows only one thing: that you have no idea what you are talking about and are an unworthy interlocutor.
good stuff as always by linus, goat tech talker in town
Not only the hardware but also the software. I used to work for the USA's largest bank, which has pretty much all of its processing here in the UK (yeah, that's odd). Anyway, they have the biggest DC in the UK, which is the size of 3 football fields. It is insane. Every server runs a special core of Linux and is immediately absorbed into a massive computing pool once it comes online. So those 2000+ physical servers are all effectively one. It was super clever and very interesting to work with.
What is interesting is laser fiber optics (repeaters, fiber-optic amplifiers); yes, I took photonics in college. It needs lasers of a specific power, in the IR range. It's not cost-effective or efficient, but it allows for fast internet. The history of fiber-optic cables is neat too.
Your PC can be a server. It just depends on the configuration of the hardware and operating system. Also, intended use is part of what determines whether it is a server or not.
This is the sort of video you should be producing. Stuff for everyone.
Could you make a video on all the hardware that you would typically find in those enterprise server racks?
7:00 those passive C-Payne bifurcation boards are pretty nifty - I've used a similar one a few years ago. Afaik, they're offering actively switched ones (ASMedia / PLX bridge) as well.
Awesome topic… I’m still waiting for the LTT Equinix vid to drop!
Love the way Linus butters us up for the Equinix video
The easiest way to do a server for home use or your own Plex is a mini PC with an i5-10500T, like an HP EliteDesk, which can be had for $120ish; then grab a DAS and fill it up with drives, and there you go.
The pace of this video is 1.5x what my brain can comprehend. I know tech stuff, but wow, this was fast-paced. So many things thrown at me at once. Love Techquickie, but slow it down just a bit in the future.
Another key difference is remote KVM and IPMI integration.
No one wants to stay close for long periods of time to these space heaters with turbines, or travel far just to turn it off and on again.
TLDR, the software, that's it. The hardware configuration is completely dependent on use case, a workstation will be more like a server but in your office. A remote gaming server can use all gaming kit and be in a rack. There is no fundamental difference outside of software.
Fundamentally, a server is a device that provides a service. If your smartphone shares a file with other phones, it is a server. But we need devices that act solely in that role, which is where the hardware mentioned in the video comes in. You can still have a PC with home-grade hardware work as a server; many do, and Linus has done vids with those builds.
I want LMG to do what is best for the company but I'll miss TQ! Looking forward to whatever you have planned though.
A server is designed to run multiple instances of a few programs, while a PC is designed to run a few instances of multiple programs.
I once built a headless Jellyfin server with old desktop parts and Linux. It's still going strong to this day.
The first PC I built 10 years ago, I repurposed a few years ago to be an ESXi server and runs a few VMs, including Plex.
4:40 speed chart. The further you go, the faster and more expensive it gets.
Hard drives < 2.5" SSDs < NVMe SSDs < slow RAM < faster RAM < L3 cache in the CPU < L2 < L1
that's all i know.
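A rough way to see the bottom rungs of that ladder yourself, in Python (the scratch file name and size are arbitrary; cache levels need finer-grained tools than this, and the OS page cache can flatter the storage number):

import os, time

SIZE = 128 * 1024 * 1024                 # 128 MiB test buffer
PATH = "ladder_demo.bin"                 # throwaway scratch file

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

t0 = time.perf_counter()
with open(PATH, "rb") as f:
    data = f.read()                      # storage -> RAM
t_storage = time.perf_counter() - t0

t0 = time.perf_counter()
copy = bytes(data)                       # RAM -> RAM copy of the same bytes
t_ram = time.perf_counter() - t0

print(f"storage read: {t_storage:.3f}s   RAM copy: {t_ram:.3f}s")
os.remove(PATH)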
Architecture used to be different, too. x86 machines rule the roost now but most Unix and Unix-like boxes weren’t x86 until sometime after the turn of the century.
I have a uVAX II and ostensibly it was a server at some point. But I think the exceptions to x86 now might be IBM's mainframes. Or ARM.
rip channel super fun, linus cat tips, mac address and techquickie
Please, can you do a video about full-tower server cases? I'm looking to buy one for my next PC with a lot of drives.
When I started in computers, mid '91, it was basically Novell NetWare 3.11, and the hardware was mostly the same as the workstations, depending on the year. Normally you would have more RAM, larger hard drives, and a faster processor, and, depending on the number of drives you had, probably a larger tower. When I left computers, we were building servers using Intel server boards with dual PIIIs, SCSI drives, a server case, and ECC RAM.
Putting aside all the semantics and the 'everything is a server if used as a server' narrative, the basic difference (which is mostly hard-coded in the motherboard hardware) is which subsystem gets the processor's most powerful input/output bus. In a PC, it is the graphics subsystem, because in a PC you have a screen where you want beautiful gaming and other graphics for your money. In a server, it is the data storage subsystem, as needed to serve incoming requests. That makes a PC a bad server, and vice versa.
Great episode. Here's something I have on my mind: when I was in high school in the mid-2000s, the school's computer lab had a server. Why?! I mean, all of the 20-30 computers in the room were normal PCs, just like the ones you might have at home - meaning they weren't some kind of dumb terminal. So why did we need a server? Is it possible that it was just a misnomer (or old nomenclature) for some sort of (non-wireless) internet router?
Your school most likely had a Windows server. With Windows Server, you can set up a Windows domain environment, which is used in businesses, the enterprise, and even in schools. In short, Active Directory lets you manage computers and user accounts from a centralized location. So if you think about it, you most likely had a unique username and password for yourself. And you likely were able to sign into any school PC with that username/password. That's possible thanks to Active Directory: all the computers are joined to that domain and can access resources that are made available to students. And if you think about it further, it would be impractical to manually create a user account for every single student on every single computer. That's one of the many reasons they use a directory service like Active Directory (see the sketch below).
It's possible that server also acted as a file server for shared network drives, or as a DHCP/DNS server, among many other functions.
@@JJFlores197 Thanks a lot! I think it makes sense, although I don't remember us having to enter any username or password. However, access to files - yeah, that makes quite a bit of sense.
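To make the centralized-accounts idea above concrete, here's a hedged sketch of the kind of directory lookup a domain-joined PC effectively performs. It assumes the third-party Python ldap3 package, and the hostname, domain, and account names are all made up:

from ldap3 import Server, Connection, NTLM

server = Server("ldap://dc01.school.local")             # hypothetical domain controller
conn = Connection(server,
                  user="SCHOOL\\itadmin",               # hypothetical admin account
                  password="placeholder-password",
                  authentication=NTLM,
                  auto_bind=True)

# "Who is jsmith, and which groups are they in?" -- the question every lab PC
# effectively asks the directory instead of keeping its own local account list.
conn.search("dc=school,dc=local",
            "(&(objectClass=user)(sAMAccountName=jsmith))",
            attributes=["cn", "memberOf"])
for entry in conn.entries:
    print(entry.cn, entry.memberOf)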
Loved the video but the volume was super high.. just came from watching another LTT video and this blew my headphones off
A server has a management interface (IPMI) to better access and monitor it remotely, even when the OS crashes.
Or you just don't want to sit next to the server in a loud and hot datacenter while doing firmware updates...
Not all of them.
HPE sells many servers without remote access, as do Dell, Lenovo, etc.
The RAM part is why I would want to get old server stuff for cheap... having a RAM drive with games installed on it is something I always wanted to try! (Modern games, that is, because for old games even an M.2 is instant loading.)
Currently I am preparing to build a server for my homelab. Recently I bought a Supermicro X11DPi-NT motherboard for 2 CPUs; I want to shove in some higher-clocked Xeons with many threads so I can have a beefy machine. It will serve as a NAS, HTTP fileserver, NVR, some multiplayer game server (UT2004? :D), and in the future maybe a router based on pfSense or OpenWrt. Also for trying some AI-based stuff, after I install an Nvidia GPU later. Still waiting for some components to arrive though. Some time ago I was thinking about purchasing an older Dell PowerEdge R530 or something like that, but with an upgraded configuration it would cost nearly the same as building it myself.
I always like the comparison: A server to a desktop is similar to comparing a truck to a car. Essentially different form factor and better prepared for heavy duty usage.
4:16 I didn't guess it. 😔
Google used to run on desktop-grade PCs at the beginning in the 2000s though; they saved a ton of money on that and were able to deploy across the globe.
Oh, I hope he talks about server workstations. I've been dreaming of owning one for various AI things. You know how some people look at boats, cars, homes, etc. they won't ever own but dream of?
Yeah, that's me with server workstations, since they cost 50 to 100s of thousands depending on the custom build.
1:13 You guys missed a great spot to plug in the Seasonic sponsorship part. I literally expected the ad here. But never mind.
I used to be an IT guy at a nonprofit. We used AMD-based 486 off-the-shelf parts repurposed from office computers, in a regular case, in our Novell 4 servers. They never went down. That was in the 90s.
Yeah, server hardware is efficient in the data centre, but at home an older PC with a low-end video card, or a Raspberry Pi, is fine to run a webserver or NAS. Also, server hardware only starts to become power-efficient in the data centre, not so much at home. That is because in the data centre there are enough tasks to keep server hardware occupied. I don't need an EPYC-based server at home (too much space, cost, and power usage).
RIP Techquickie 🙏
Much needed video
"Welp, there goes the cure for cancer", that researcher was pretty chill
Thanks for the video!
Biggest difference is that servers are designed for workflows, crunching numbers, etc., but one main thing they tend to suck at is PC gaming. A gaming PC has multiple functions, and the one thing it excels at is PC gaming, but given the right specs it can be a Swiss Army knife, a jack of all trades. Easiest way to break it down: servers = work, while gaming PC = playing games with the added benefit of doing other things. Also, price is a big factor!
A server serves, a computer computes, mobiles are mobile, printers print.
They're generally named after their purpose.
A server is software that offers a service which another piece of software (the client) connects to and then uses. (In the context of IP networks, a server would have an IP address and port.)
What kind of hardware you put under this service comes down to how important the service is and what is affordable.
Watching during a gaming break. I'm winning dad.
Thank you Linus for the video!
Now I can finally explain to non-tech people what I do at my job when I say “I sell servers…computers but industrial…”
I'm pretty sure the clocks on server CPUs are lower for a lot more reasons. I find that number of around 3.6 GHz quite constant, even among consumer units.
Looking at the modern prebuilts at stores and the enterprise models, they look pretty similar underneath.
For me a Server is a system that can handle multiple users at the same time. That's it.
User 'requests' is a bit more accurate.
Sure that sounds easy, but I just keep running out of USB ports for all those mice...
Unless you're NCIS. Then two people can use the same keyboard at the same time... 😮💨
That's a function of the software, though, not the hardware. A desktop PC from the 2000s can be put into service as a web server. It won't answer a million concurrent requests at reasonable speed, but it can definitely handle more than one at a time.
@@argvminusone Yes, exactly. The hardware can be optimal or not for the task, but it doesn't make a PC or a server. The software/OS does.
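As a minimal sketch of that point, the Python standard library alone turns any old box into a web server that handles several clients at once (the port is arbitrary):

from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# One thread per connection: even 2000s-era hardware can juggle a handful of
# simultaneous requests, serving files from the current directory.
ThreadingHTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()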
Recently there's been a push toward compute density, even if it ruins efficiency. In the AI race nobody considers the long-term costs, but not having to open a new datacenter helps save a lot of time.
Insightful. Thanks 👍
Considering mini PCs nowadays: are they any more space-efficient or power-efficient than servers if you had dozens of them?
Every PC can be a server, but not every server can be a PC
What happened at 7:17? Some random edit?
"What's the difference between a Server and a [personal computer]" is kinda answered in the name of them, tbh, but I'll watch either way.
My media server (Unraid running Plex) has ALL desktop components, but to keep power consumption down (and save space) it has a G-series Ryzen CPU, so I don't need a GPU. It has an ASRock B550 Phantom Gaming 4 mb (lol) and 32GB of Corsair Vengeance RAM, which is overkill, but whatevs.
It's all inside a rackmounted 3U case. Been up and running for a little over a year w/ zero issues.
The Server is a Non Player Character, the PC is a Player Character trying to recover from a main quest battle.
Video suggestion - under a possible new "What's the difference" video series??? As an old-school elder, I've never been a huge fan of mobile phones, but even to me it's fairly common knowledge that you should "ideally" charge your phone from 20% to only 80% to help the battery last longer over time. So imagine my surprise when my new Dyson vacuum's documentation recommends running the battery down to zero once a month or so, to help the battery last longer. It immediately sounded like a scam to me, not knowing a lot about battery life - so you have to buy a new Dyson replacement battery when the old one dies, due to running it from 0% to 100% too often. But if it's NOT a scam - then how about a video that compares the differences between PHONE / VACUUM / & SAY EV CAR BATTERIES, especially from the point of view of trying to extend their life as much as possible, and how that might differ from one type of battery to the other?
This video should have been titled "What's the difference between server and PC hardware?". Old PC hardware can be used for a server. The difference is function, not hardware.
I would partly disagree: on servers you run 24/7, you most likely want a management interface and serial interfaces.
03:40 AI researchers, can you please add this layer already? We're ready for them to become conscious!
I was expecting baseboard management controllers to also be a topic.
YES I NEED TO LEARN ABOUT THIS TOO!
There are also servers that can run on a desk, for consumers and small businesses. I think today many servers can be PCs, and many a "PC" isn't really a PC; it's more like a walled-garden console with all kinds of stupid restrictions.
For me, PC means Personal Computer: I can bring it home, and I control it completely.
Do videos about different types of server motherboards, or how to choose a build for different types of servers.
No, they are not more reliable but failure-safe. That is why they have mechanisms for working past failed components (be they reliable or not), e.g. two PSUs, RAID, etc.
I would love a tech quickie focused on NPUs.
No, not the AI ones. The network processing units. Those big bulky boxes for data centers focused on their networking
No mention of fault tolerance in parts: multiple power supplies, RAID cache batteries, RAID controllers, and disc arrays with multiple redundancy.
Also no mention of IPMI or iLO (HPE iLO is actually amazing for remote management).
Double-ported enterprise storage, so that there are basically two computers attached to it in one enclosure; even if one server completely craps the bed, it can still keep on ticking with the other half...
None of these are requirements for a computer to be considered a server.
HP, Dell, and many other vendors sell server devices with... none of them. The bottom third of the ProLiant series contains none of them.
What on earth is a "disc array"? Is that when you have multiple DVD burners in RAID or did you mean to say "disk" with a k?
Stop. Think. Post.
@@tim3172 Yes. Stop. Think. If you'd done that, you'd not have posted.
Soon we'll be seeing videos in a tiny line. Why not keep the resolution adapted to existing screens? It's the same with movies: we have 16:9 screens, and they film with huge black bars. I don't understand the point of filming between 2 black lines... :)
Pls make a video where you explain the different PCIe connectors, like U.2 and E3.S, and their SFF names... this is confusing if u don't know what is what.
As someone with six of those servers, I can confirm they are pretty sweet, but pricey to buy, run, and maintain 😂