You guys are aware that you can record the video and add a voiceover afterwards, right? It's so loud over there that your audience can't understand anything, and with all the background noise it's hard to focus on what you're saying.
Sorry, I'm French and I don't understand everything in this video. In this server you have 2 CPUs; is that because it works in RAID mode? (Sorry for any mistakes, I'm trying my best; please correct me, thanks.)
@@aidanthetaylor For crying out loud, an old Ivy Bridge E5 Xeon can take 768GB per CPU; that's 1.5TB for a dual-socket configuration. Newer Xeons can take a lot more RAM.
@@aidanthetaylor E5s and E7s, sure; E3s only started catching up post-Ryzen. However, he does mention that you can get more with other configs in the very next sentence. Hell, my motherboard and CPU combo can do 1.5TB or more if I allow it.
Concerning the server failure alarms: at least two are going off in the background of this video, and shame, no wrist strap! An out-of-date understanding of the technology, too.
Server alarms are normal, from out-of-date firmware to failed disks. They just get ignored until someone's at the DC next. Wrist straps aren't typically used, as the entrances to these rooms have anti-static mats and the flooring is designed to avoid ESD altogether.
James Schmidt, you don't see alarms being ignored when detected in any of the data centres I work in. Poor work style by this gentleman in multiple videos.
@@crazyboy2006cashier Dude, did you even watch the video? He literally showed the wrist strap and explained why he didn't have to wear one (7:55). And that is not a customer's server, but an old one of theirs (1:00).
What's the point of making this tutorial video?! The noise is so disturbing! The instructions and the way you explain things are so nonchalant! You don't enjoy making this video; you're all over the place. Advice: plan your tutorial video. Make a script! Show the parts, even if it's just an intro to servers! Definitely not subscribing 😤
Btw, if you're watching this and don't know what 1U means: it's the amount of space a unit takes up in the rack; bigger servers will take 4 units.
Great video as usual thanks :)
P.S. Use an audio editor and remove the frequency of the background noise; it's a higher frequency than the voice, so it should work nicely :)
Yeah please do
Also, remove the servers in the background that aren't working correctly. Fan noise could be reduced by changing the microphone setup.
@@user2C47 Those are most likely live servers and cannot be taken down without the customer's approval. Also, they sound like failing BBUs.
@@Redal24 How do you know?
@@Knightfall23 I am a data center engineer.
Looking forward to the hypervisor build
I'm so excited for the next hypervisor video. I was hoping there would be more advanced stuff, and you guys never disappoint. Thanks for the great vids!
One of the best explanations; if you're a fresher, you can learn a lot from this video
Thank you very much for remembering the powerful HP Proliant G7 servers. They are very noble servers that are currently in production in companies. Greetings from the Aztec land in Mexico City. The big Techno-titlan.
techno-titlan XD?
When you need to unrack a server in order to repair it, how do you ensure all of its workload gets transferred to another server so there's no interruption when you shut it down? Is there an automated feature for this in common hypervisors like Proxmox or VMware?
That's just a load-balancing setup. Usually they don't go to the trouble of redirecting traffic before pulling the server, but if the server takes x seconds to respond, your request may fail, or another attempt will be made to serve it on a separate server.
But the main idea is: your request goes to the load balancer, which decides which server will process it. If that server is down, the request can just plain fail or be "redirected" to another server.
@@claytoncoleiro7190 Thanks for your reply. I imagine in the end it's the app's responsibility to implement a proper health check, and the moment it fails (due to the app being killed or the server malfunctioning), the orchestrator will spin up a new instance.
@@WolfElectronicS Nah, a well-designed app shouldn't handle that; it's the load balancer's job. It depends highly on configuration.
@Wolf ElectronicS If the server is in working condition but just needs to be upgraded or repaired by changing some part, then in VMware's case you can use something called vMotion: it can migrate a live, powered-on virtual machine from one host to another. There are some requirements to make that happen fast. The first is shared storage; it isn't strictly required, but it's recommended so the live migration goes faster. The second is a VMware vCenter appliance/installation, which is what actually makes vMotion possible. I don't know about other hypervisors; I haven't had much work experience with them.
Most applications have a load balancer of some sort that shares the load between multiple instances, or High Availability software that switches between instances if the main instance fails. Examples are VMware HA (high availability for VMware clusters) or HP Serviceguard for Linux.
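The load-balancer behaviour described in this thread can be sketched in a few lines. This is a toy illustration, not any specific product's implementation; the backend names and the health-flag dictionary are made up for the example.

```python
import itertools

class RoundRobinBalancer:
    """Toy round-robin balancer: skip unhealthy backends, fail only if all are down."""
    def __init__(self, backends):
        self.backends = backends                   # name -> healthy? (health-check result)
        self._cycle = itertools.cycle(list(backends))

    def pick(self):
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if self.backends[candidate]:
                return candidate                   # route the request here
        raise RuntimeError("no healthy backends")  # the "just plain fail" case

# "srv-b" is down (say, unracked for repair); traffic flows only to a and c
lb = RoundRobinBalancer({"srv-a": True, "srv-b": False, "srv-c": True})
picks = [lb.pick() for _ in range(4)]
```

Real load balancers add health-check probes, retries, and weighting on top of this basic skip-the-dead-backend loop.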
All I can think of is the "this is fine" meme, with Ash casually describing the server while alarms go off in the racks around him. 🤣
You guys are great! Love your channel and remarkable content. Thank you
Great video Ash! Interviewing at the moment for entry-level positions, these are great for getting an idea of the day to day life.
Would you consider doing an in-depth video on troubleshooting components for customer servers?
Your thought process and how you would solve it would be great info. Cheers mate
You need a workbench outside the DC (for the videos :)
no, having the background setting inside the actual data centre makes it better
@@KaesOner Not for my ears
Great video :) I just learned about RAID a couple days ago, so was nice hearing a little about a real world application.
Hi guys! I really enjoy every video. As a Sr. Infra Admin I know what life working there is like, every day, and I love it!!! Thanks for sharing and keep going! (I wish I could work with you 💪🏼👨🏻💻)
There's a fair bit of incorrect information in this video; for one, CPUs (at least in this particular server) are not hot-swappable. Only mainframes or very high-end servers can have CPUs hot-swapped.
Thank you for outstanding audio. Perfect place to record. :/
Servers have up to 4 PSUs in an N+1 or N+2 configuration (up to 2 hot-swappable PSUs in midrange servers and 4 PSUs in quad-CPU servers). They also have remote management capabilities, either built in or on an add-on card (like generic IPMI, Dell iDRAC, HP iLO, Fujitsu iRMC, or the IBM Remote Supervisor Adapter). Servers have fewer points of failure and are more fault-tolerant than standard desktops or workstations because of:
* Redundant hardware configuration (PSUs/RAM modules/CPUs)
* ECC memory
* Remote management capabilities
* Hardware RAID setup
Filming 101: less background noise. You could have done this in a meeting room. If you want to keep the viewer's attention, you need as little distraction as possible, and a room of servers running full blast is very distracting... Very good content, just wish I could hear it better :P
A room full of noise where you will be working, and you say it's very distracting? So you're essentially telling everyone that you find the place you'll work in a distraction from your task? Maybe you should change your career.
Also, white noise is soothing to a lot of people, which makes it less distracting than a completely silent room with just a boring voice blaring info. Go back to your filming career.
Hi, I have 2 questions for you:
1. Would you advise running fiber or copper between the switch in the rack and the server?
2. Can you please do a video explaining how your fire-suppression system works in the data center?
Thanks
Copper suffices in most cases unless you want really high speeds.
Can I ask: how many servers are there per rack? I see you mentioned a server can hold half a terabyte of RAM. I'm really interested in how much physical space the cloud might take up now and in the future. Would really appreciate any help, thanks!
This is a late answer to your question, but maybe I can help. I work in a large enterprise DC for a Fortune 200 big pharma company. Most racks are 44 RU in height, and the server he was working on is 1 RU, so in theory you can rack 44 of those into one rack. That is rarely the case, however; things to consider are power consumption for the MOAs and airflow/heat. With regard to cloud computing, most DCs are moving toward high-density computing (one physical server with 100 virtual servers running on it). Need a new server? Flip a switch. BANG, new VM. Hypothetically speaking: 1 rack with 44 physical servers at, let's say, 20 VMs each. That's 880 VMs in one rack. Keep in mind this is all hypothetical; I'm not going to talk real numbers.
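The back-of-the-envelope density figure above is easy to reproduce. The numbers (44U rack, 1U servers, 20 VMs each) come straight from the comment and are hypothetical, not real capacity planning:

```python
def vms_per_rack(rack_units=44, server_units=1, vms_per_server=20):
    """Hypothetical VM density for a fully populated rack (ignores power/cooling limits)."""
    physical_servers = rack_units // server_units  # how many servers physically fit
    return physical_servers * vms_per_server

# 44 x 1U servers at 20 VMs each -> 880 VMs in one rack
print(vms_per_rack())
```

In practice, power and heat limits mean racks are almost never fully populated, as the comment notes.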
Hello,
Regarding ideas for next videos, a short video on cable management would be great 🤓
Just found your channel. Fascinating stuff!
I haven’t even watched this video yet but thank you for this video ! 😎🤝
What raid config are they using?
Would appreciate seeing you partner with or showcase a host, to show what you mean to them, what they do, and what you do for them in the datacenter. Would be nice to have something based on game or web hosting!
James,James,James!!!
This is awesome. What RAID level is most popular in data centres? The heatsink on your CPU looks weak. What are the most popular core counts and clock speeds for DC CPUs?
From my experience, it will be RAID 5/6 for storage arrays and RAID 1/10 for local server storage. As for the CPUs, most of the hypervisor servers I install have two 10-core CPUs with HT, usually between 2.4 and 2.8 GHz base clock. Obviously, with very new servers like the ones with AMD Epyc, you can get as many as 64 cores in a single CPU.
The heatsink might look weak, but consider the 10000+rpm fans that push air across it. When I start mine up they sound like a jet taking off, as you can hear in the background.
Core counts actually depend more on what you run. Some software like databases, etc. have licensing structures that encourage fewer very fast cores, while a lot of web stuff is done with many slower cores - for a million requests a day you just need a shedload of threads running at once, even if they're not the fastest things ever.
I have a DL360 as my local fileserver; I've got 8 600GB NetApp drives in RAID 6 for 3.6TB total.
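The usable-capacity rules for the RAID levels mentioned in this thread are easy to check. A rough sketch that ignores filesystem and formatting overhead:

```python
def usable_gb(drives, drive_gb, level):
    """Approximate usable capacity for common RAID levels."""
    if level == 0:
        return drives * drive_gb         # striping, no redundancy
    if level == 1:
        return drive_gb                  # everything mirrored
    if level == 5:
        return (drives - 1) * drive_gb   # one drive's worth of parity
    if level == 6:
        return (drives - 2) * drive_gb   # two drives' worth of parity
    if level == 10:
        return (drives // 2) * drive_gb  # striped mirrors
    raise ValueError(f"unsupported RAID level: {level}")

# 8 x 600GB drives in RAID 6 -> 3600GB, the 3.6TB quoted above
print(usable_gb(8, 600, 6))
```

The same formula shows why RAID 6 is popular for arrays: you lose only two drives' worth of space regardless of array size, while surviving any two drive failures.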
"If there's a blame, there's a claim" hahaha
I really enjoy your channel, and I hope I can see more and more of your videos in the future
Hi guys! You have great content going on here; hope you keep it coming. It might seem simple to you because you see it every day, but for some it's gold. Keep it up!
I would like to see how you built your monitoring; as mentioned in a previous video, you made it from scratch to your own liking. I think there might be sensitive data involved, but I hope it's possible. More power to you guys!
I'm halfway through the video and you haven't told me about any components save the lid and the lever for the lid; you're blueballing me here
Well, you missed it; he goes on to mention the thermal paste
What's the purpose of the battery on the RAID controller, and what would happen if it were removed? Our Dell servers at work don't have such batteries on the RAID cards. Surely the RAID and related disk configuration would be stored in non-volatile memory.
Also, you forgot to mention out-of-band management systems such as Dell's iDRAC or HP's iLO, which servers often have.
The battery is for the write-cache memory. The idea is that on some RAID levels, writing data to disk takes time, so it's buffered. The battery prevents data loss in the event of a power failure (the data is retained until the system is started again).
The onboard battery backup stores the current write to NAND before the controller shuts down. This ensures that at least the last write is completed when the server is powered back on, so you can resume somewhat safely without the risk of file corruption. Generally speaking, it's just best practice to have a BBU with your RAID card.
Why not just place a supercap or button-cell battery on the RAID card instead?
@@ElliottVeares They do: a battery or a capacitor bank. It still needs logic to make it work properly.
That is a battery backup unit (BBU). It powers the cache in the event of a power failure so that the cached data can still be written to disk and isn't lost; the RAID cache is made of RAM, which is volatile (data is lost if power is cut). The BBUs in older RAID controllers used Li-ion batteries; now they use supercapacitors.
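The write-back caching that the BBU protects can be modelled in a toy sketch. This is purely illustrative (real controllers do this in battery- or supercap-backed RAM on the card, not in software):

```python
class WriteBackCache:
    """Toy model of a RAID controller write cache: ack fast, flush to disk later.

    The BBU's job is to keep `pending` alive through a power cut so it can
    still be flushed to disk when power returns.
    """
    def __init__(self):
        self.pending = []  # cache RAM (battery/supercap-backed in real hardware)
        self.disk = []     # what has actually reached the platters

    def write(self, block):
        self.pending.append(block)  # acknowledged to the OS immediately
        return "ack"

    def flush(self):
        self.disk.extend(self.pending)
        self.pending.clear()

cache = WriteBackCache()
cache.write("block-1")
cache.write("block-2")
# A power loss here would lose both blocks without a BBU; with one,
# they survive in cache RAM and are flushed on the next boot.
cache.flush()
```

This is also why controllers without a working BBU often fall back to slower write-through mode: acknowledging a write before it hits disk is only safe if the buffer survives a power cut.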
Compared to a desktop computer, a server is headless (no monitor/keyboard/mouse) and often diskless. For headless, mention HP iLO or Dell iDRAC. For diskless, talk about boot options (boot from HBA/SAN, SD, USB, the network, or disk).
Don't forget servers that are nothing _but_ disks.
I really enjoy the content of your channel. Next time, perhaps try using a personal mic so it can isolate your voice; a lot of ambient noise makes your explanation really difficult to hear. :D
Great video. Informative. For noobs like me. Keep em coming guys
RAID 10 is a mixture of both because it's RAID OneZero, not Ten :D
Strange seeing a datacenter company broadcast themselves this way, given that it seems like they have a director or multiple people behind the camera; I'm guessing someone from corporate? Nonetheless, good content. I wonder if this place is N2.
If a component fails in a server, does the server report this as an alarm to the NOC, the customer, or both?
And how do the alarms work? Does the server send SNMP traps to the NOC with a description of the alarm condition?
Given how many clients have had faulty equipment in our data center for months, nobody seems to care, so it doesn't matter here. Maybe in America people care.
Probably an SNMP alarm, since monitoring systems and sensors use SNMP
Awesome video.
Is it possible to broadcast a website without DNS server just with an IP?
You cannot broadcast a website; you can only broadcast within a broadcast domain. So you'd be able to broadcast to the computers in your local network attached to the same router, but you can't broadcast to the internet. Also, broadcasting has nothing to do with DNS servers.
Do you possibly mean host? Of course you can host without a DNS server, but users will have to enter your server's IP.
@@jort93z Thanks for replying. When I put the IP in the browser, do I have to attach a port to it?
@@KhandkarAsifHossain You can attach a port to it. If you don't, your web browser will default to port 80 (for HTTP).
So if your web server runs on port 80, you don't have to specify the port in your browser.
@@jort93z Thanks again. Now that I have a connection to my server via the internet, can I use it as an application server / database server to give access to both an Android app and a web app?
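The point in this thread about reaching a site by bare IP (no DNS) and the port-80 default is easy to demonstrate with Python's standard library. This sketch binds to localhost on an ephemeral port so it can run anywhere, then fetches the page by IP:port, exactly as a browser would when no hostname is involved:

```python
import http.server
import threading
import urllib.request

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"served by bare IP, no DNS"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch_by_ip():
    # Port 0 asks the OS for a free port; a real web server would use 80,
    # which browsers assume when no port is given in the URL.
    srv = http.server.HTTPServer(("127.0.0.1", 0), Hello)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    ip, port = srv.server_address
    page = urllib.request.urlopen(f"http://{ip}:{port}/").read()
    srv.shutdown()
    return page
```

Run `fetch_by_ip()` and you get the page back with nothing but an IP and a port, which is exactly what happens when you type a raw IP into a browser.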
You can use normal thermal paste; you just have to pick one with a high thermal transfer rating (the higher, the better).
Very cool video.
Not sure if you misspoke, but CPUs are not hot-swappable. Hot-swap means replaceable while the server is powered on. Things that are hot-swap would be the power supplies (as long as they're redundant), hard drives, and fans. CPUs, RAM, and PCI devices are not hot-swap.
CPUs, no, but my college teacher did have an old server in the classroom whose specs said hot-swap RAM; I never got to see how well it worked. Not sure if the PCI was hot-swappable; I thought I'd heard of it before, but I can't be completely sure.
@@mousejjt2 Never heard of hot-swap RAM. Current servers don't have that, I know that much.
@@1960bosman Oh, nice to know
I’m running that exact server in my homelab
Do HP / Dell servers have noisy fans at idle? I have an IBM X3650 M1 server, and it's obnoxiously loud even at idle.
@@sonicthedgehog9473 For my HP server (depending on the power and cooling profiles), it can be almost unnoticeable.
Great video. Thumbs up
Loving the video
The most vulnerable video I have ever seen
The audio is a disaster. Next time, take some B-roll of the hot aisle and just green-screen the demonstration in a quiet room. The first rule of video production is always: people will forgive you for bad video, but never for bad audio. Think of TV reporters covering bad weather: the audio is always perfect, and they don't care if the video is blurry or out of focus.
love the non-obligatory dying server noises barely audible over fan noises
Really a good video
Great video
I wish I could see your edge routers and calling configuration
Are most of the 2.5" drives in these HDDs or SSDs?
And why not go the route of four 3.5" drives, as they are more reliable and hold much more storage?
And what do 2.5" drives go for? I worked at a company and the price was bonkers; the vendor charged nearly $600 for a 500GB HDD.
You getting that US healthcare level of pricing?
@@BichaelStevens It's waaay cheaper than US healthcare.
For a beginner, you just messed up my brain
This hot aisle isn't that bad... my basement rack alone is louder... I can't get the heat out that easily, so I need to run it at 30°C ambient, fans at 90-100%...
Great video. I would highly suggest writing a bit of a script for these more information-dense videos; you hop between concepts, and someone who doesn't know anything about the topic basically can't follow at all.
CAN YOU SPEAK UP I CANT HEAR YOU OVER THE BACKGROUND NOISE
Dude, he's recording from inside a DC. Chill out
CAN YOU TURN OFF THE FAN OR AIR CONDITIONER? YOU HAVE A LOT OF BACKGROUND NOISE! 😂
@@jaywalkra Actually, in my opinion they don't need to record inside the DC, especially for this video; they could simply put a DC image in the background :-)
Ash, man, you typically step out when you're changing topics. (For me) it would have been better to walk out of the hot room (rack area).
This would have really emphasized the difference in decibel level inside and out, giving the viewer a much clearer contrast between the noise levels.
I've watched and liked a lot of your videos but had to cut this one short. Way too much rambling and too many tangents; maybe a script is needed for a multi-faceted item like a server. Still enjoy the other videos, though!
I'd like to see a CPU being swapped while the server is running (that's what hot swap means, I guess)
No, that ProLiant processor replacement is NOT hot swap; only the fans, PSUs, and disks are hot-swappable.
You can't hot swap a CPU. You need to unrack and power off the server to replace it.
I just decommissioned about 10 pallets of this model server. A small form factor workhorse.
Would love to understand how a colocated 1U server gets access to the internet.
Like, how does the networking work?
All the servers in a rack are connected to a switch called a ToR (top of rack). These ToRs are also called T0 switches and are about 1U in size.
Now, these racks are in rows, as he explained. Somewhere in each row is another switch, an even bigger and spicier one; compared to the 1U switch, these are about 6U or even larger. This would be a T1 switch.
Now take all of these T1 switches and uplink them to another layer where all the T1 traffic comes together: the T2. These can handle an insane amount of traffic. Usually an entire row is needed because of their size, holding multiple T2s. This is basically the internet access point for the colocation.
In some cases these are then connected to a network room, or a regional gateway; from those rooms the data leaves the datacenter and enters the global network.
There is a LOT going on in these network rooms, too much for a simple YouTube comment :)
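The tiered hierarchy described above (server → ToR/T0 → row T1 → aggregation T2 → gateway) can be sketched as a toy uplink graph. This is just an illustration; all device names here are made up, and real Clos fabrics have many more redundant paths:

```python
# Toy model of the tiered switching described above:
# servers -> T0 (top-of-rack) -> T1 (row) -> T2 (aggregation) -> gateway.
from collections import deque

# uplinks: each device points at the device(s) one tier above it
uplinks = {
    "server-rack1-u01": ["t0-rack1"],
    "t0-rack1": ["t1-row1"],
    "t1-row1": ["t2-agg1", "t2-agg2"],  # redundant aggregation uplinks
    "t2-agg1": ["gateway"],
    "t2-agg2": ["gateway"],
}

def path_to_gateway(device: str) -> list:
    """Breadth-first search up the uplink graph until we reach the gateway."""
    queue = deque([[device]])
    while queue:
        path = queue.popleft()
        if path[-1] == "gateway":
            return path
        for up in uplinks.get(path[-1], []):
            queue.append(path + [up])
    raise ValueError("no route from " + device)

print(path_to_gateway("server-rack1-u01"))
# prints ['server-rack1-u01', 't0-rack1', 't1-row1', 't2-agg1', 'gateway']
```

The point is just that every packet climbs one tier at a time until it hits the layer with a route out of the building.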
Next, please make a video about routers, switches, firewalls...
Anybody knows the name of the intro song? If so can you please tell me? :)
Think about shooting in quiet environments next time
@The Lavian The control room, the offices, and the main lobby are quiet environments.
Welcome to data-center life...
Using a headset microphone would also help. (Half the distance = 4 times the signal strength!)
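The "half the distance = 4x the signal" rule is the inverse-square law: sound intensity falls off as 1/r², so halving the mic distance quadruples the intensity, which works out to about +6 dB. A quick sketch of that arithmetic (the function name is just for illustration):

```python
# Inverse-square law: intensity is proportional to 1/r^2.
# Halving r quadruples intensity, i.e. a gain of about +6 dB.
import math

def gain_db(r_old: float, r_new: float) -> float:
    """Change in sound intensity level when moving a mic from r_old to r_new."""
    return 10 * math.log10((r_old / r_new) ** 2)

print(round(gain_db(1.0, 0.5), 1))  # prints 6.0 (halving the distance)
```

That +6 dB of voice relative to constant fan noise is exactly the signal-to-noise improvement a headset mic buys you.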
Noticed your cable management isn't very tidy 😉
was hoping someone commented on it lol
Oh, I'm probably the kind of person who causes people to wear ear protection in OVH's datacenter
thanks
Shout out kay Owa Luis Paraguison!
I don't think those fans are hot-swappable because they are blue ones.
You guys are aware that you can record the video and add a voiceover afterwards, right? It's so loud over there that your audience can't understand anything, and with all the background noise it's hard to focus on what you're saying.
It feels good to hear those background buzzes
It’s very easy to hear him
You must be deaf AF
@@TheRealJohnMadden maybe but I blame your ma ma for that, she knows how to scream
@@edgargarcia209 that’s your fault, shouldn’t be hitting it so good
Please turn off the noisy equipment before shooting your next video. It can't be that important, can it?
I don't believe so, shutting everything down for the video does sound nice, maybe they'll do it next vid
Sure, would that be for here or to go?
To those that didn't get it. He was making a joke. Of course you can't shut down a live datacenter
Haha, you'd shut down the whole business if you did that. He probably just needs to move the shooting location.
Sure Ivar just shut down the entire data center and tell clients that you'll be back in a jiffy!
Sorry, I'm French and I don't understand everything in this video.
This server has 2 CPUs; is that because it works in a RAID mode?
(Sorry for any mistakes, I'm trying my best; please correct me, thanks)
As far as I understand, RAID is about disks, not CPUs. Servers have multiple CPU sockets for more compute capacity (more cores and memory channels) and to balance processor load; it isn't really redundancy, since a failed CPU will usually halt the whole server.
An HP DL380 Gen9 will fall over (stop working) at around 40°C.
That's the air intake temperature.
for the love of god change that ups battery
are you an MB (tm)? :)
That's probably an FBWC (flash-backed write cache), not a BBWC (battery-backed write cache)
Why did they make you film in the hot aisle 🥵
👍
A good tip: maybe show us a server that isn't, like, 20 years outdated :(
background fan noise killed your video dude.
It's a Data Centre...... Not a quiet room.
Not so bad; it feels like an authentic server environment :) But he could certainly try using noise-suppression software on the audio.
Yes, they should turn off the whole data centre for a better video experience for you :)
Yo
Awe how cute, a flat PC
Hi
This guy has terrible ADD
hes a person talking about what he likes, not a news anchor stfu
Read this literally as something disturbing was forced in my face
'Limited to 32GB of RAM'? Jesus, where is this guy from, the early 2000s? Catch up, dude; we can get desktop CPUs that handle 128GB on a single chip.
It's still correct for most single-socket CPUs in the server space, mainly Intel Xeons.
The early 2000s would be 4GB. It was hard to even find 64-bit in those days.
@@pederstrmKollenborg It hasn't been correct for quite a number of years. All modern Xeons can handle 64GB+.
@@aidanthetaylor For crying out loud, an old Ivy Bridge E5 Xeon can take 768GB per CPU; that's 1.5TB in a dual-socket configuration. Newer Xeons can take a lot more RAM.
@@aidanthetaylor E5s and E7s, sure, but E3s only started catching up post-Ryzen. However, he does mention in the very next sentence that you can get more with other configs. Hell, my motherboard and CPU combo can do 1.5TB or more if I allow it.
Concerning: at least two server failure alarms are going off in the background of this video. And shame, no wrist strap! Out-of-date understanding of the technology, too.
Server alarms are normal, from out-of-date firmware to failed disks. They just get ignored until someone is next at the DC. Wrist straps aren't typically used, as the entrances to these rooms have anti-static mats, and the flooring is designed to avoid ESD altogether.
James Schmidt, you don't see alarms being ignored when detected in any of the data centres I work in.
Poor work style by this gentleman in multiple videos
@@crazyboy2006cashier These are customer racks, DC staff don't have permission to solve issues, some don't even have permission to report them
James Schmidt, that's fine, but why then are they opening their customers' servers? They could have picked a room with no alarms.
@@crazyboy2006cashier Dude, did you even watch the video.
He literally showed the wrist strap and explained why he didn't have to wear one. 7:55
And that is not a server of a customer, but an old one of theirs. 1:00
COVID started because you coughed into your hand
Ya gotta keep that hand sanitizer handy IJS
scratch body dust into servers.... great.
What's the point of making this tutorial video?! The noise is so disturbing! The instructions and the way you explain things are so nonchalant! You don't enjoy making this video; you're all over the place. Advice: plan your tutorial video. Write a script! Show the parts, even if it's just an intro to servers! Just for the hell of it?! Definitely not subscribing 😤
Please add English subtitles.
A video on fire suppression would be nice.