Thank you. You are the only one who talks about the drawbacks; all the vendors only talk about benefits.
Cheers. Yeah, vendors have a habit of omitting any information that doesn't encourage you to crack open your wallet!
Which is why I don't trust salesmen.
That was a super clear and well visualized explanation and by far the best I have ever seen - watched 17 other videos and read a bunch of blogs that only made hyper convergence sound more confusing. I would give 10 thumbs-ups, if I could.
Thanks! Maybe you'll find 9 more of my videos you like and you can thumbs-up them instead. 😉
People who are trying to sell you something want to make things seem more complicated and sophisticated than they really are. People who want you to understand try to make things seem as simple as possible.
Very well explained.
Personally I dislike hyperconverged for the exact reasons you said: you can't easily scale up a single resource type, and if things go wrong, they can go apocalyptically wrong, and suddenly you need to be an expert on every part of the chain.
Thanks 🙂
Yes, those moments when you get to peer behind the curtain of "simplicity" and see what's actually going on behind the scenes can be... interesting...
Great video. Hyperconverged vendors are currently trying to shame traditional 3-tier architecture as if it were obsolete, last-century tech.
Thanks! Newer doesn't mean better... They really need to focus on the benefits of their own solution rather than trying to badmouth competition. It's an instant turn-off for me if the salesperson resorts to that.
I'm neither a developer nor a network engineer - I come from the business side - and this was crystal clear. Great explanation. Also, I came here for an explanation of hyperconverged and, lo and behold, you also cover converged! 👏👏👏
Thanks! Glad it was helpful.
This was perfect. All key points touched without too much complexity.
Thanks! Glad it was useful.
Very good video. Well described without making it too complex. I really hate when things go wrong and only help available is an expert...that scares me the most about HCI.
Thanks! Although a chunk of the marketing focuses on it being easier to use for administrators who aren't experts in the full virtualisation/storage/networking stack, I would definitely recommend that anyone buying it for those reasons should probably have a support contract in place for when it goes wrong and they suddenly need an expert in everything to fix it.
Learning from this video I understand now that HCI is not for me and my customers (I'm an IT admin). Thank you for taking the time to explain.
Glad I could help. There's no one-size-fits-all answer in IT so just because it works well for one person doesn't mean it will for another.
@@ProTechShow Agree. Also, just because it's the latest tech doesn't mean it's "the best". As I'm mastering the "old way" of 3-tier, I'll just skip this development cycle. Thanks for sharing.
Great video, except for one thing. I can tell you as an admin of hyper-converged infrastructure that your negative point was ill-informed. Upgrading storage is surprisingly easy with technologies like Ceph. Networking is where the real headache comes in - literally at every level. It wasn't expensive; it's just a headache.
It's true, but Ceph is designed to handle whole disks on its own: it handles data-object redundancy and multiple parallel writes to disks (OSDs) itself. Hardware RAID controllers are not designed for the Ceph workload; they don't improve performance or availability and sometimes even reduce performance. At most, a host bus adapter (HBA) should be used. That said, it's recommended that you use fast SSDs instead of HDDs, and this will greatly increase the overall expense. :-)
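To put some rough numbers on that expense, here is a hypothetical back-of-envelope sketch in plain Python - the drive sizes and the 4+2 erasure-coding profile are invented for illustration, and real pools also need free-space headroom:

```python
def usable_capacity_tb(raw_tb, scheme="replica-3", k=4, m=2):
    """Rough usable capacity for a Ceph pool, ignoring metadata
    overhead and the recommended free-space headroom."""
    if scheme == "replica-3":
        return raw_tb / 3            # three full copies of every object
    if scheme == "erasure":
        return raw_tb * k / (k + m)  # k data chunks + m coding chunks
    raise ValueError(scheme)

raw = 12 * 3.84  # e.g. twelve 3.84 TB SSDs across the cluster
print(f"replica-3: {usable_capacity_tb(raw):.1f} TB usable")
print(f"EC 4+2:    {usable_capacity_tb(raw, 'erasure'):.1f} TB usable")
```

The 3x replication default is part of why all-flash Ceph gets pricey: you buy three SSDs' worth of capacity for every one you can use.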
@@salvatorecampolo2032 Enterprise SSDs are an added expense, yes. However, the right tool for the right job: broadly speaking, Ceph for workloads (Docker, VMs) and a virtualised ZFS NAS solution like TrueNAS SCALE for long-term bulk storage. Meaning spinning rust still has a place here.
For the most part, doing that means you can have your cake and eat it too, provided you follow the 3-2-1 rule of backing up your data.
Also, I wouldn't use hardware RAID anymore. It's dead. Software RAID or G-RAID.
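For anyone unfamiliar with the 3-2-1 rule mentioned above (at least 3 copies of the data, on 2 different media types, with 1 copy off-site), a minimal sketch of the check - the dict structure here is made up purely for illustration:

```python
def satisfies_3_2_1(copies):
    """copies: list of dicts like {"media": "ssd", "offsite": False}.
    True if there are >= 3 copies, on >= 2 media types,
    with >= 1 copy off-site."""
    return (
        len(copies) >= 3
        and len({c["media"] for c in copies}) >= 2
        and any(c["offsite"] for c in copies)
    )

backups = [
    {"media": "ssd",  "offsite": False},  # live Ceph pool
    {"media": "hdd",  "offsite": False},  # ZFS NAS replica
    {"media": "tape", "offsite": True},   # vaulted copy
]
print(satisfies_3_2_1(backups))  # True
```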
@@KILLERTX95
Yes, in a "hyperconverged world" hardware RAID might as well be wounded, because in HCI it's no longer essential. But the ICT universe is not just made up of hyperconverged infrastructure, so the market will determine if and when a technology lives or dies.
As far as I'm concerned, announcements like "hardware RAID is dead" are just personal interpretations, or mere propaganda - like the proclamations that have been predicting the death of hard drives for decades.
After all, if one were to give credit to the various proclamations and follow the advice of the foremost Linux expert, Linus Torvalds, who strongly advised against the use of ZFS, then those who think like you about ZFS would already be out of the game.
:-)
@@salvatorecampolo2032 To me, hardware RAID hasn't seen nearly the innovation the rest of the space has. Even if I HAD to roll out a new ICT system, I would still hesitate to use hardware RAID.
But like you said, things change. Software RAID used to be slow and buggy; these days I think it's better than hardware RAID. I guess we will see.
:)
Thanks for the very detailed & visualized explanations! Been on Google for hours and this is the best clarification by far!
Glad it helped
Just started a new job in Philly selling on behalf of Cisco and this was super helpful to me. Still hard to understand but I am glad you posted this. Thank you so much...
Thanks Bobby
Thanks for the vid. Will retain our trad setup instead of switching to Nutanix.
Great video, and you explain so well! This helped a LOT!
Thanks!
Thank you for the explanation. Very easy to understand!
You're welcome
7:05
Same performance - HCI is usually faster due to local reads
Same resilience - HCI usually has faster rebuild time with less impact on performance as it is handled in a distributed manner
Saying it's the "same" was a generalisation that there isn't a major performance or resilience compromise by going HCI, but in practice it depends on your design and application requirements. In some scenarios your statements about HCI being faster would be true. In other scenarios a 3-tier infrastructure would outperform HCI.
Some examples that spring to mind: shared storage allows for more drives working together, which means more IOPS. A dedicated storage tier allows for more high-speed RAM caching than HCI, where the RAM is primarily allocated to running virtual machines. Most host failures in a 3-tier infrastructure wouldn't actually incur a storage rebuild, as compute nodes aren't shared with storage nodes and tend to outnumber them - so the rebuild time is more likely to be zero.
IMO it really depends on the application requirements, budget, and design used as to which topology would come out on top in a particular situation.
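To make the "more drives working together means more IOPS" point concrete, a deliberately naive sketch - the drive counts and per-drive IOPS figure are invented, and real arrays are limited by caching, controllers, and RAID/replication write penalties:

```python
def pool_read_iops(drives, iops_per_drive=75_000):
    """Naive aggregate read IOPS, assuming every drive serves
    requests in parallel and nothing else is the bottleneck."""
    return drives * iops_per_drive

# 3-tier: a shared array pools, say, 24 drives behind every host
print(pool_read_iops(24))  # 1800000
# HCI: a VM whose reads land mostly on its node's 4 local drives
print(pool_read_iops(4))   # 300000
```

The flip side, as the comment above notes, is that the HCI reads are local and skip a network hop, so raw pool IOPS is only one part of the picture.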
Very good explanation!! I want to hear more infrastructure knowledge from you!
Thanks!
Hey I want to thank you I love learning and your videos have been great!!! Keep it up!
Thanks!
Very good, simple explanation taking into account the traditional infrastructure and how the solutions have evolved. Thanks 👍👍
Thanks!
Thanks, this primer was exactly what I needed.
Glad it helped! 🙂
This is a really useful and insightful overview!
Thanks Scott! 🙂
Very well put explanations about HCI... I was really attracted by the bells and whistles of its easy-to-manage design, but the catch you've described is off-putting indeed.
In any case, as I am still curious, I will still look to set up a certain software vendor's Community Edition in a lab environment to familiarise myself with it and see from there.
Thanks. No better way to decide than to give it a try for yourself. Good luck!
If we are setting up our own private cloud using software like CloudStack, which architecture would be best? The traditional 3-tier architecture, or hyperconverged?
We need to be able to provision VMs and ensure automatic failover to other nodes should any hardware fail at any time. There's also the cost element, but the architecture is more important at this point.
There isn't really a single answer to that. It depends on how flexible you need each part to be, how you anticipate each component will need to scale, your mix of skills, financing, etc.
Great video!!! I am looking into the cost-effectiveness, with respect to licensing, of HCI vs dHCI.
So much good information in one video 👍👍👍
Thanks!
Awesome explanation... this video really helps us evaluate our requirement. Thank you.
Thanks! Glad it was useful 🙂
I see this as a way for a cloud solutions to beef up the edge to improve things like DaaS.
I have listened to various videos on this topic and I still have a hard time understanding what these are. I actually work on Cisco hyperconverged systems as smart hands. Often the customer complains about how much they hate these devices and how expensive they are. Now, I don't get sent to folks whose systems are running, only to the customers who are having trouble, so I don't have a true sample of how well these do or don't work. Thank you for your video; I will re-watch this, and maybe again.
Send them to HPE lol
iSCSI is slower because the TCP/IP stack slows it down, and the data must pass through the CPU - i.e. the CPU is tasked with doing I/O that was offloaded to DMA controllers with normal SCSI. A SAN works faster and greener when it is built with RDMA (remote direct memory access) devices like InfiniBand, with SRP (SCSI RDMA Protocol) as the underlying protocol.
Also note that FC is vastly different from Ethernet. FC uses credit-based flow control (buffer-to-buffer credits) to avoid congestion and thus has guaranteed delivery. Ethernet can become a big problem if you overload the buffers on a switch.
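A toy queue model illustrates why that matters once a switch buffer fills - the numbers and the two-frames-per-tick arrival pattern here are invented, and real switches are far more complex:

```python
def drops(offered_per_tick, drain_per_tick, buffer_slots, ticks, credit_based):
    """Toy model of one switch port where the sender offers frames
    faster than the port drains them. With credit-based flow control
    (FC-style buffer-to-buffer credits) the sender pauses when no
    buffer slot is free; plain Ethernet tail-drops the excess."""
    buffered = dropped = 0
    for _ in range(ticks):
        for _ in range(offered_per_tick):
            if buffered < buffer_slots:
                buffered += 1        # frame accepted into the buffer
            elif not credit_based:
                dropped += 1         # buffer full: Ethernet tail drop
            # with credits, the sender simply holds the frame (no loss)
        buffered = max(0, buffered - drain_per_tick)
    return dropped

print(drops(2, 1, 8, 100, credit_based=False))  # 93 frames lost
print(drops(2, 1, 8, 100, credit_based=True))   # 0
```

In the lossless case the congestion doesn't disappear, of course - it's pushed back to the sender as latency instead of loss.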
Nice video, very clear. Thanks for sharing.
You're welcome 🙂
Awesome video, thanks! Great to meet a fellow enterprise tech enthusiast. 😊 Subscribed.
Thanks! Glad it was of use.
Very well explained. Thank you
Thanks!
With NetApp it's easy to buy extra DS (disk storage) when you already have a FAS.
I heard Nutanix doesn't support/allow hardware after 5 years, is that true?
Good overview - liked the video. Hope to see a deeper analysis of the key technologies used in HCI.
Thanks! I'll put that down on the future ideas list.
Very helpful overview
Thanks 🙂
Thanks for the explanation🙏
You're welcome 🙂
Hello sir, your video is so interesting. I don't know if someone has asked this or not, but if an organization has a data center with an HCI configuration and wants a DRC with a 3-tier architecture, is that possible? What are the pros and cons of doing that versus using the same infrastructure as the current data center? Thank you in advance.
Provided the DR solution is compatible with both, then yes; although you'd probably want to at least keep the hypervisor the same on both ends.
In terms of downsides... many vendors provide built-in replication options that require the same vendor to be used on both ends, so you may find that you need to use something else - particularly if the storage vendors don't match as many replication methods occur at the storage layer.
Great explanation! Thanks! :)
You're welcome. Glad it helped.
Can you talk about dHCI, which claims to solve HCI's drawbacks?
Awesome explanation!
Thanks!
What exactly is meant by linear and nonlinear when it comes to infrastructure?
In this context it's referring to the way the infrastructure's resources scale in relation to the number of people using it.
If you take the example of virtual desktop infrastructure - each virtual desktop would have a set amount of CPU, RAM, and storage allocated. If you double the number of people, you need to double the number of desktops, so you double the amount of CPU, RAM, and storage required from the infrastructure. We'd call that linear, and it's easy to scale with HCI - double the number of people, double the requirements for each resource, so double the number of HCI nodes.
Now consider a file server that holds documents. If you were to double the number of people it would probably have little impact initially as the storage tends to grow over time, although it would increase the rate of growth. Chances are the effect on the server's CPU and RAM would be negligible unless it was already struggling. In this case the relationship between the number of people and the resources being consumed is not linear and if you just doubled the number of HCI nodes it wouldn’t match your storage growth and you'd end up with a load of excess CPU and RAM.
Jolly well struck and delectably eupeptic 👍 I am appreciating how well this video condenses so much good information.
One now may witness HA/failover with HCI for power (mains & battery), network (NICs, switches, cables), storage (spinning and nand), and compute (cpu+mem) . . . even for dartingly labile embassies and femtofauna.
Kindest regards, neighbours and friends.
Like the video, but I would beg to differ about some of the points regarding linear scalability. Not all hyper-converged vendors are the same. Some allow non-linear expansion: e.g. very compute-heavy nodes for VDI, storage-heavy nodes, or even storage-only nodes (just enough compute for the virtual controller) used for file/object-based workloads. Also, hyper-converged solutions are normally based on webscale technology, which is designed to tolerate failure and has multiple points of resilience and self-healing. (Full disclosure: I work for one such company.)
Good explanation.
Thanks!
Nicely explained.
Thanks 🙂
Well done video!
Thanks! Glad you appreciated it.
Well explained video
Cheers!
quite interesting, thank you
You're welcome 🙂
I need to set up a project with this tech. Is there any Spanish-speaking specialist available?
Makes complete horse sense. Thank you
Thanks 🙂
Thanks.
You're welcome 🙂
1 year squad yo!
🙌
That cat 3:55 😺😄
Nice explanation, but kind of fast for me.
No idea yet on HCI. Comment for the algorithm.
Thanks for the comment 🙂
Why wouldn't you go with cloud? All those problems go away and you can focus on the product you're building instead.
The typical reasons are cost and compliance. Cloud is great for flexibility and if you can refactor applications for cloud-native technology the cost can be reasonable, but most organisations aren't writing their own apps - they're using off-the-shelf ones and if you simply lift & shift them to the cloud it can get very expensive very quickly. Most I come across are running some kind of hybrid mixture at the moment.
@@ProTechShow ok, thanks!
I’ve specialised in VMware, but this “hyper converged” infrastructure is truly doing my head in. Too complex. And it’s kind of stupid. Like you said, if I just want to buy more storage, I don’t want the whole damn node - just disks where I can store my data. Sorry, not a fan.
are u animated?
In the sense that I can move and stuff... yes?
The video quality is due to being recorded on a phone. If you're actually interested, I have a behind-the-scenes showing the upgrades I've made since this video and you can compare the before and after... or just see proof that I'm real: ua-cam.com/video/Q0Vu005ltKc/v-deo.html
Great stuff! I see these things on a daily basis at work - turns out it's "a thing" and it's called hyperconvergence?? Who would have thought :D LOL
My never-ending battles with the storage guys who never want to acknowledge their NAS latency issues... I keep having to explain to everyone that my VMs "live" on their storage. Like, what more do you want me to do, bruh?!?!? LoL
Can't build a product without a good buzzword, can you? 😁
Still watching as this just popped into my recommendation list.
Popped by to say: My god man, use newer pictures.
A DL380 G7, a Cisco 3650? And is that an HP MSA 30? (That's a disk shelf, not even a proper array.)
I had a good laugh when I saw those.
It's not a real picture of a DL380. It's a vector image representing a server and it happens to look like a DL380, with friendly licensing. If you recognised it as a server, it served its purpose. Looking like a different hardware model wouldn't change its use in the video and would always become outdated in time. What matters more for the purpose of this video is having a licence that allows me to actually use it in the video.
Harvester HCI is even better!
So amazing and informative... however, for God's sake, speak slowly, at a tutorial/learning speed and accent, so everybody can catch up.
Just move to the cloud!
It's a good choice for some, and not for others. There's never a one-size-fits-all answer.
Thank you for such a clearly explained video about this!
Glad it was useful!