One of the best, simplest videos for anyone who wants to refresh these concepts.
Thanks. Nice video for basic knowledge.
Great explanation, really appreciated. God bless you :)
Thanks for sharing, a nice and very clear post.
Best explanation, man! Amazing!
I admit it is a long sit. Thanks for your comment!
Sir, fantastic, really amazing; it is very nice and easy to gather all the information. Could you please also share all of this information as a single video or document uploaded to your Google Drive? It would be great to be able to download it and go through it offline, and to read it in the future.
While doing a node upgrade, why do we keep the node in an offline state? Can we do node upgrades with the node online?
Hello, ONTAP allows non-disruptive upgrades. You upgrade a running node, and then you reboot that node. After the node is rebooted and back in the cluster, you follow the same procedure for the other node. Best practice is to plan your upgrade using the Active IQ Upgrade Advisor. Hope this answers your question.
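As a rough sketch of what that looks like from the clustershell (the image URL and version number below are just placeholders, not taken from the video), an automated non-disruptive update runs one node at a time:

```
cl1::> cluster image package get -url http://webserver/ontap_image.tgz   # placeholder URL
cl1::> cluster image validate -version 9.13.1                            # run the pre-upgrade checks
cl1::> cluster image update -version 9.13.1                              # rolling update, node by node
cl1::> cluster image show-update-progress                                # watch each node reboot and rejoin
```

During the update, each node's workload is taken over by its HA partner, which is why the upgrade is non-disruptive even though each node reboots.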
I love your Udemy course for cDOT. I wanted to know if you plan to do the same type of course for object storage, i.e. StorageGRID.
Thanks for the compliment. Good idea!
@uadmin I am rooting for it. Let me know once you have it. I appreciate you helping people learn stuff.
Good explanation, thank you.
thanks!
Is this the only naming convention for the root aggregate, or can there be something else?
There is not really a naming convention; it is something you decide yourself. 'vol0' is called 'vol0' by default, but you could change that, just like you can rename aggr0 to anything you want. Usually people do not change the name of vol0.
aggr0 of node 1 is usually called aggr0; aggr0 of node 2 is automatically renamed by the cluster. I usually change aggr0 to aggr0- to keep the aggr0 name unique in the cluster.
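For example, renaming a root aggregate to keep the names unique is a single command from the clustershell (the new name here is just an illustration):

```
cl1::> storage aggregate rename -aggregate aggr0 -newname aggr0_cl1_01
```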
If I have 4 nodes in a cluster, how many cluster management LIFs will there be: 4 LIFs, or something else? Another question: if node1 in a cluster is hosting the cluster management LIF and node1 fails or collapses, how can the entire cluster be managed? You mentioned something about failover, but it is a bit fuzzy to me. Could you please explain it in terms of the nodes mentioned in my question?
It does not matter how many nodes you have in the cluster: you always have one, and only one, cluster management LIF. If the node that hosts the cluster management LIF fails, that LIF will automatically fail over
to another node in the cluster (unless, of course, you have a single-node cluster).
Next to the one cluster management LIF per cluster, every node in the cluster has a node-management LIF.
The node-management LIFs do not fail over to another node in the cluster.
So in summary: if you have 6 nodes in the cluster, you have 1 cluster management LIF and 6 node-management LIFs.
In the example below you see the output of a 2-node cluster in which you run
"net int show -fields role,curr-node"... nodes cl1-01 and cl1-02 both have a node-mgmt LIF,
and there is 1 cluster-mgmt LIF that runs on cl1-01.
The 4 LIFs with the 'cluster' role are Cluster Interconnect LIFs.
cl1::> net int show -fields role,curr-node
(network interface show)
vserver lif role curr-node
------- ------------ ------- ---------
Cluster cl1-01_clus1 cluster cl1-01
Cluster cl1-01_clus2 cluster cl1-01
Cluster cl1-02_clus1 cluster cl1-02
Cluster cl1-02_clus2 cluster cl1-02
cl1 cl1-01_mgmt1 node-mgmt cl1-01
cl1 cl1-02_mgmt1 node-mgmt cl1-02
cl1 cluster_mgmt cluster-mgmt cl1-01
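To see where the cluster management LIF is allowed to fail over to, or to move it by hand, something like the following can be used (LIF, vserver, and node names taken from the example output above):

```
cl1::> net int show -lif cluster_mgmt -fields failover-policy,failover-group
cl1::> net int migrate -vserver cl1 -lif cluster_mgmt -destination-node cl1-02
```

The first command shows the failover policy and failover group that determine which ports the LIF may move to; the second moves the LIF to cl1-02 manually, which is the same thing the cluster does automatically when cl1-01 fails.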
@uadmin Now I understand it. Many thanks for your explanation.
Thanks for posting this, it's REALLY helpful.
At 9:25 you state that each data SVM has one root volume and one data volume... but data1_vol1 and data2_vol1 are both in n1_aggr1. I see they're each listed as being in their respective SVMs, but wouldn't you more typically have data2_vol1 in n2_aggr1?
Hi, I understand what you are saying, but since SVMs are cluster-wide, in principle a volume can be in any aggregate. Imagine an 8-node cluster: its SVMs can have volumes in any of the data aggregates.
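And if you did want data2_vol1 on n2_aggr1, volumes can be moved between aggregates non-disruptively. A sketch (the SVM name here is an assumption; the volume and aggregate names come from the discussion above):

```
cl1::> volume move start -vserver svm_data2 -volume data2_vol1 -destination-aggregate n2_aggr1
cl1::> volume move show
```

The move copies the volume to the destination aggregate in the background and cuts over when the copy is complete, so clients keep working throughout.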
Very good explanation!
Great overview!
Thanks!
@uadmin Kindly provide the YouTube link for module 2 and the other videos.
udemy.com
Hello Sir,
Can I have the PPT, if you have one?
Sorry for the stupid question: by 'node' do you mean a single NetApp system with 2 controllers, or are you counting each controller as a node?
Hi, a node is a single controller. Two controllers in the same chassis form a single HA pair. So: 'node' is another name for 'controller', 'system', 'host', et cetera. It is a single piece of hardware or a single VM. An ONTAP cluster can consist of one single node or of sets of two nodes. Every two nodes that you add to a cluster form an HA pair that shares the same disks. Hope this helps...
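You can see both views on a running cluster: one command lists the individual nodes (controllers), the other shows how they pair up (the output layout will vary by ONTAP version):

```
cl1::> cluster show              # each controller appears as one node
cl1::> storage failover show     # shows which two nodes form each HA pair
```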
Thanks
Thank you so much for doing this!
You are welcome!
I would like an Arabic explanation of this video.
Sorry, I don't know how to do that... Maybe you can turn on Arabic subtitles?
Thank you so much for this very detailed guide; I searched the whole internet and found nothing decent anywhere!
What idiots came up with this kind of management and logical design? It's nonsense.
Thanks
Thanks