Trich Tech Institute
India
Joined 31 Jul 2018
This is an educational channel where we publish videos on various emerging technologies, how-tos, reviews, and comparisons. You are free to watch them, learn from them, and share them with your friends.
#HappyLearning #TrichTech
Suse/Opensuse 15 High Availability Cluster - Pacemaker || Fencing Using STONITH Block Device (SBD)
In this video, we discuss the STONITH Block Device (SBD): what disk-based and diskless SBD are, how to configure each, and how to manage SBD devices. I also explain the SBD start mode and what it does.
Views: 1,182
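For reference, a minimal sketch of the disk-based setup covered here (the device path is a placeholder, and the msgwait 180s / watchdog 90s timeouts are examples, not values from the video):
# sbd -d /dev/disk/by-id/<shared-disk> -4 180 -1 90 create
Then point SBD_DEVICE in /etc/sysconfig/sbd at the same disk and restart the cluster stack. Verify with:
# sbd -d /dev/disk/by-id/<shared-disk> dump
# sbd -d /dev/disk/by-id/<shared-disk> list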
Videos
#Shorts || Ceph Quincy || Create RADOS Gateway (RGW) User || Demonstration
491 views · 1 year ago
In this short video, I demonstrate how easily you can create an RGW user to connect to the Ceph cluster via the RADOS gateway (object gateway).
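As a rough sketch of the command involved (the uid and display name are placeholders):
# radosgw-admin user create --uid=testuser --display-name="Test User"
The output includes the access_key and secret_key the client needs for the S3 API.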
#Shorts || Ceph Storage Quincy || Deploy Rados Gateway (RGW) via Ceph Orchestrator || Demo
391 views · 1 year ago
This is a short demo of deploying the RGW service on Ceph Quincy via the Ceph orchestrator.
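A minimal sketch of the orchestrator call, with an example service id and placement:
# ceph orch apply rgw myrgw --placement="2 host1 host2"
# ceph orch ls rgw
The second command verifies the service has been scheduled and is running.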
Suse/Opensuse High Availability Cluster - Pacemaker || Understanding STONITH || SUSE / OpenSUSE 15
557 views · 1 year ago
In this episode, I discuss STONITH devices in a SUSE/openSUSE Pacemaker cluster: what they are used for, why they are important, and which STONITH devices are available. I also give an overview of physical node fencing via BMC and virtual node fencing via libvirt.
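For example, a hedged sketch of a libvirt-based STONITH resource for virtual nodes (the host list and hypervisor URI are placeholders):
# crm configure primitive stonith-libvirt stonith:external/libvirt \
    params hostlist="node1,node2" hypervisor_uri="qemu+ssh://kvmhost/system" \
    op monitor interval=60
# crm configure property stonith-enabled=true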
Suse/Opensuse High Availability Cluster - Pacemaker || Understand Fencing || SLE / OpenSUSE 15
501 views · 1 year ago
In this video, you will learn about the fencing mechanism in a SUSE/openSUSE HA cluster: the available fencing methods, when you need fencing, the role of quorum, and why your cluster needs a fencing mechanism.
Ceph Storage (Quincy) || Managing RADOS Block Devices (RBD) || Create, List, Map, Mount & Delete ||
981 views · 1 year ago
In this video, I demonstrate how you can easily create an RBD pool, create RBD users with keyrings to run RBD commands, create RBD images, map the images on the client, mount them manually or automatically, and delete RBD images when required.
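A condensed sketch of that workflow (pool, image, and mount point names are examples; the mapped device is typically /dev/rbd0):
# ceph osd pool create rbd_pool
# rbd pool init rbd_pool
# rbd create rbd_pool/disk1 --size 10G
# rbd map rbd_pool/disk1
# mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt
# umount /mnt && rbd unmap /dev/rbd0
# rbd rm rbd_pool/disk1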
Ceph Storage (Quincy) || List/Dump/Create (Replicated & Erasure Code)/Delete CRUSH Rules || Demo
537 views · 1 year ago
In this video, we demonstrate how to list existing CRUSH rules, dump the contents of a rule, create replicated and erasure-coded CRUSH rules, and delete an unwanted CRUSH rule.
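The corresponding commands look roughly like this (rule and profile names are examples):
# ceph osd crush rule ls
# ceph osd crush rule dump replicated_rule
# ceph osd crush rule create-replicated my_repl_rule default host
# ceph osd erasure-code-profile set my_ec_profile k=4 m=2
# ceph osd crush rule create-erasure my_ec_rule my_ec_profile
# ceph osd crush rule rm my_repl_rule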
Ceph Storage (Quincy) || Manage Ceph Pools || Create, Delete, Rename, Set Quota, Enable Application
698 views · 1 year ago
In this video, I discuss how you can work with Ceph pools. I demonstrate how to list existing pools, create new pools, delete existing pools, rename pools, set pool quotas, enable/disable applications on pools, and finally modify the replication size of pools along with their min_size. This demo is based on the Ceph community edition Quincy. But the same can be ...
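A quick sketch of those operations (pool names and values are examples; deleting a pool also requires mon_allow_pool_delete=true):
# ceph osd pool create mypool
# ceph osd pool rename mypool newpool
# ceph osd pool set-quota newpool max_objects 10000
# ceph osd pool application enable newpool rbd
# ceph osd pool set newpool size 3
# ceph osd pool set newpool min_size 2
# ceph osd pool delete newpool newpool --yes-i-really-really-mean-it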
Ceph Storage (Quincy) || Graceful Poweroff/Shutdown/Reboot Ceph Cluster Nodes via Ceph Orchestrator
608 views · 1 year ago
In this video, we demonstrate how you can gracefully shut down or reboot Ceph storage cluster nodes. Before rebooting or powering off the nodes, you need to stop the running Ceph services. We show how you can use the Ceph orchestrator to stop the services before rebooting the nodes.
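As a hedged sketch, the preparation typically looks like this (the service name is a placeholder; list yours with `ceph orch ls`):
# ceph osd set noout
# ceph osd set norecover
# ceph osd set norebalance
# ceph orch stop rgw.myrgw
After powering the nodes back on, unset the same flags with `ceph osd unset noout` and so on.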
Ceph Storage (Quincy) || Graceful Poweroff/Shutdown/Reboot Ceph Cluster Nodes via Systemctl
561 views · 1 year ago
In this tutorial, I explain how you can gracefully shut down Ceph storage cluster nodes: the order in which to stop the services and power off the nodes, and the order in which to bring them back up, so that you don't accidentally corrupt the cluster and lose data.
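A minimal sketch, assuming cephadm-style systemd units: on each node, stop all Ceph daemons with the umbrella target, OSD nodes before monitor nodes, and bring them back in the reverse order:
# systemctl stop ceph.target
# systemctl list-units 'ceph*'
The second command confirms nothing Ceph-related is still running before you power off.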
Ceph Storage (Quincy) || Upgrade/Update Ceph Cluster || Connected Online || Ceph Orchestrator
579 views · 1 year ago
In this video, I demonstrate how you can upgrade a Ceph storage cluster using the Ceph orchestrator when your cluster is connected online (has access to the public container registry). Along with the upgrade process, I discuss the important prerequisites to meet before starting the upgrade activity. These steps are valid for Ceph community edition versions Quincy, Pacific, Octopu...
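The core orchestrator commands, as a sketch (the image tag is an example):
# ceph orch upgrade check --image quay.io/ceph/ceph:v17.2.6
# ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.6
# ceph orch upgrade status
Watch `ceph -s` during the upgrade; the orchestrator rolls through the daemons type by type.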
Ceph Storage (Quincy) || Deploy/Install/Bootstrap Ceph Cluster using Service Configuration File ||
970 views · 1 year ago
In our last video, I demonstrated how simply you can bootstrap a Ceph storage cluster using the Ceph orchestrator CLI. But that is ideal for a small/POC cluster setup, not for large production clusters. To bootstrap a large cluster, you need something easy to use and faster to deploy. For that, Ceph has an option to deploy a cluster using a service specification/configuration file. In this video...
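A minimal sketch of such a spec and the bootstrap call (hostnames, IP, and file name are placeholders):
service_type: host
hostname: node1
addr: 10.10.10.11
---
service_type: mon
placement:
  host_pattern: 'node*'
---
service_type: osd
service_id: default_osds
placement:
  host_pattern: 'node*'
spec:
  data_devices:
    all: true
Then bootstrap with:
# cephadm bootstrap --mon-ip 10.10.10.11 --apply-spec cluster-spec.yaml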
Ceph Storage (Quincy) || Deploy/Install/Bootstrap a Ceph Cluster || Cephadm Orchestrator CLI method
3.6K views · 1 year ago
In this video, I demonstrate how easily you can bootstrap a minimal Ceph Quincy cluster using the cephadm orchestrator CLI method. The same steps can be followed to deploy Red Hat/IBM Ceph Storage 5 and 6, and also SUSE Ceph Storage.
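The heart of that method is a single command on the first node (the IP is a placeholder):
# cephadm bootstrap --mon-ip 192.168.1.10
This brings up one mon and one mgr, writes /etc/ceph/ceph.conf and the admin keyring, and prints the dashboard credentials.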
Ceph Storage (Quincy) || Purge a Ceph Cluster || Destroy/Reclaim/Delete/Remove Ceph Storage Cluster
1.5K views · 1 year ago
In this video, I demonstrate how you can purge a Quincy Ceph storage cluster. Please note: this is a destructive operation, so don't run it on a production cluster. These steps completely remove the cluster, clean up the underlying disks, and destroy the data. Once you perform these steps, you will not be able to recover the data. Usually you can follow these steps when you are s...
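The destructive step looks roughly like this (get the fsid from `ceph fsid` first):
# ceph mgr module disable cephadm
# cephadm rm-cluster --force --zap-osds --fsid <fsid>
Run rm-cluster on every host; --zap-osds is what wipes the OSD disks, so there is no way back.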
Ceph Storage (Quincy) || Replace Object Storage Device (OSD) from Ceph Cluster || Ceph Orchestrator
823 views · 1 year ago
In this video, I demonstrate how you can replace an OSD in the Ceph cluster. This demo is based on the Ceph community edition Quincy, but the same can be followed on Red Hat Ceph Storage 5 and 6, and also on SUSE Ceph Storage.
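The key step is removing the OSD with the --replace flag so its ID is preserved for the new disk (the OSD id is an example):
# ceph orch osd rm 3 --replace
# ceph orch osd rm status
Once draining finishes and the failed disk is swapped, the orchestrator can redeploy osd.3 on the new device.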
Ceph Storage (Quincy) || Remove Object Storage Device (OSD) from Ceph Cluster || Ceph Orchestrator
755 views · 1 year ago
Ceph Storage (Quincy) || Add Object Storage Device (OSD) to Ceph Cluster || Ceph Orchestrator CLI ||
1.2K views · 1 year ago
Ceph Storage (Quincy) || Setup Ceph Admin Node || Perform Ceph Administration tasks
1.1K views · 1 year ago
Ceph Storage [Quincy] || Setup Ceph Client Node || Connect Ceph Cluster and run Ceph commands
5K views · 1 year ago
Suse/Opensuse High Availability Cluster - Pacemaker || Split Brain Scenario
801 views · 2 years ago
Suse/Opensuse High Availability Cluster - Pacemaker || Configure SBD_STARTMODE behavior || Demo
1.1K views · 2 years ago
Suse/Opensuse HA Cluster - Pacemaker || Part 3: Deploy a Cluster - Secure Cluster Communications
832 views · 4 years ago
Suse/Opensuse HA Cluster - Pacemaker || Part 2: Deploy a Cluster - Cluster setup step by step
1K views · 4 years ago
Suse/Opensuse HA Cluster - Pacemaker || Part 1: Deploy a Cluster - Preparation Checklist
940 views · 4 years ago
Suse/Opensuse High Availability Cluster - Pacemaker || Cluster Node Preparation
642 views · 4 years ago
Part 5: Pacemaker Cluster Implementation Requirements || Design Test case, Testing & Documentation
499 views · 4 years ago
Part 4: Suse / Opensuse Pacemaker Cluster Implementation Requirements || Planning Storage
487 views · 4 years ago
Part 3: Suse / Opensuse Pacemaker Cluster Implementation Requirements || Determining Expectations
484 views · 4 years ago
Part 2: Suse Pacemaker Cluster Implementation Requirements || Collecting Required Parameters
695 views · 4 years ago
Part 1: Suse Pacemaker Cluster Implementation Requirements || Overview of Implementation process
984 views · 4 years ago
I was breaking my head for the last 12 hours, and you gave me the solution in 8 minutes. THANKS A LOT...!
Answers to the quiz: 1. What function does STONITH provide? Answer: STONITH lets us reset a node that has hung for various reasons (kernel panic, lost network connection between nodes). 2. Where must you run a STONITH resource? Answer: on the nodes that are not being rebooted by the same STONITH device.
Hi Sir, as you said, the CRM on the DC pushes changes to the other nodes. Do I understand correctly that every node in the cluster has its own CRM?
yes
Great👍
It's been 4 years now, but this gives me an idea of how to deal with Canonical's nonsense. Canonical is forcing snapd after every upgrade and update via polkit. Reconfiguring the policy may prevent anything you don't want from installing.
Is a shared disk required?
This is required for the quorum disk. If you have a physical server with remote power management, you can use that (e.g. HP servers with iLO).
Great ❤ thank you
Pretty nice video, clean, neat and in the expected Indian accent. Pretty nice job bro!!! You got my like!!!
Glad you liked it
Great effort. You should make a video on iSCSI target mounting too.
Great video. Great effort. Do you have a tutorial on iSCSI mounting to a target from Ceph?
Not yet. Also, I did not put much effort into iSCSI as it is going to be retired from Ceph.
@@TrichTechInstitute What will be the alternative to use instead of iSCSI?
@@skawashkar It will be NVMe-oF docs.ceph.com/en/latest/rbd/nvmeof-overview/
@@TrichTechInstitute Is it possible to configure multiple RBD pools in the iscsi-gateway.cfg? Because when I add multiple pools, the rbd-gateway-api service crashes and fails to start.
thanks 🙏
Need a course, please
Hi Trich, I have installed oVirt 4.5 on CentOS Stream 9, and a Ceph cluster v18 (Reef) deployed by cephadm on Debian 12. I want to integrate the Ceph cluster with the oVirt cluster and use it as storage for my oVirt VMs. How can I do it?
Hi Ankit, I don't have any video or doc prepared for this scenario. A quick Google search turned up the blog below; you can check it. Usually you need an RBD+iSCSI gateway for this. Please note, Ceph is going to drop support for the iSCSI gateway. blogs.ovirt.org/2021/07/using-ceph-only-storage-for-ovirt-datacenter/
@@TrichTechInstitute Thanks for your response, but the tutorial at that link did not work for me.
My CLI froze after executing this command
Do you have enough resources available on the node, like sufficient RAM and CPU? If not, your nodes can hang.
Nicely explained
I was searching for a clear explanation of "promoted" and "demoted" and finally found it here (3:23), thanks. Simple language, excellent material.
Thanks for your feedback and glad to know it helped you.
Hi, I am trying to create a Ceph cluster but am unable to. Can you guide me from scratch? Many thanks
Hello, please share your concerns so that I can help you
Do we need to run sbd -d /dev/SBD -4 180 -1 90 create on all nodes before ha-cluster-join/init? I need it to work for a 13-node cluster.
You need to create it only on the bootstrap node.
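In other words, as a sketch (device path as in the question above): initialize once on the bootstrap node, then just verify from the other nodes:
# sbd -d /dev/SBD -4 180 -1 90 create
# sbd -d /dev/SBD dump
The dump subcommand prints the on-disk header and timeouts, confirming each node sees the same metadata.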
simple and nice content.
Thanks
Is it the same way to disband a Ceph cluster on a Proxmox server?
Did you ever find the answer to this?
Great video. Thank you
Thanks for your nice video... can you please let me know if you have any video on data migration from one cluster to another?
Hello, I don't have any video ready for the data migration. I need to prepare one. Please stay tuned.
You have given a nice example and a clear explanation. I got a similar type of requirement to work on an async task; your video session helped me trigger new thoughts. Thanks for all the great work and sharing
SUSE is paid. How do I make a lab? Is there any open-source alternative?
If you want to set up a lab, you can create a free account and download the SUSE evaluation pack, which is valid for 2-6 months. Otherwise you can use openSUSE, which is a free alternative to SUSE.
you explained and demonstrated the live commands. Good.
Thanks for the feedback.
While configuring the cluster, it asks for a qdevice. Any idea what it is and how to give it an address?
The qdevice is the quorum disk. If you check the installation video, I have shown it there. It is a small shared disk which is available on all the cluster nodes for fencing a node. Usually in production you need 3 quorum disks to avoid a single point of failure.
@@TrichTechInstitute I have configured the shared disk, but still, after entering a virtual IP, it asks for a qdevice and its address. I am confused about what to provide here.
@@tanmaykanungo2051 Are you following my video and getting this requirement? What do you need the virtual IP for? I guess you can skip the qdevice section.
@@TrichTechInstitute cool, I will skip it for now. Thanks for the tutorial.
@@tanmaykanungo2051 👌
What do you mean by "update clients"? It's not clear. Great work btw!
Client here means the nodes which connect to the Ceph cluster. They have a package called ceph-common installed on them to connect to the Ceph cluster (e.g. via CephFS or RBD; connecting via the S3 API of RGW doesn't need ceph-common). Once you upgrade the Ceph cluster, you need to upgrade the ceph-common package to the same version on these clients, and also on the admin node where you run the various ceph commands.
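For example, on an RPM-based client this might look like the following (the package manager varies by distro):
# rpm -q ceph-common
# dnf update ceph-common
# ceph --version
The version reported should then match what `ceph versions` shows for the cluster daemons.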
@@TrichTechInstitute is there a way that we can chat in private?
@@cyberzeus1 you can ping on connect.trichtech@gmail.com
What are you saying? I don't understand anything.
Can you please let me know what you did not understand?
Hi there, I have 3 nodes, each with 6 NVMe PCIe 4 drives and 4x 3.5" HDDs. 1. Should I create multiple OSDs on each NVMe, or should I define them as BlueStore WAL devices or something similar? 2. Should I create a CRUSH rule policy to define zone-level failover for the different disk classes (HDD or NVMe) and for performance and backup? In short, I want to use NVMe for performance and the HDDs for backup. Thanks
This depends on your use case. You can use these NVMes as DB/WAL devices and the HDDs as block devices for the OSDs. Or you can create OSDs on the NVMes along with the HDDs, then create CRUSH rules to use the NVMes for a production/high-performance pool and the HDDs for backup/non-prod data pools. Both are doable.
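The second option might look roughly like this (rule and pool names are examples; check the actual device classes of your OSDs with `ceph osd tree`):
# ceph osd crush rule create-replicated nvme_rule default host nvme
# ceph osd crush rule create-replicated hdd_rule default host hdd
# ceph osd pool set prod_pool crush_rule nvme_rule
# ceph osd pool set backup_pool crush_rule hdd_rule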
Hi.. I'm unable to add a previously deleted OSD to the host. Getting the below error.
drut@R2S15-bare-107:~$ sudo ceph orch daemon add osd R2S17-bare-104:/dev/sdc
Created no osd(s) on host R2S17-bare-104; already created? <<<<<< error
drut@R2S15-bare-107:~$
Is there a way to add the OSD from the Ceph GUI?
@@HarikaChintapally You can add it there, but before that please ensure the disk is cleaned up. If it has a PV/VG/LV from the previous OSD, Ceph will not create the OSD. You need to zap the device first, or clean up the LV/VG/PV on the disk manually, and then try to add it back. Also check the `ceph orch device ls` output to see whether the disk shows up and, if it does, whether it shows as available. If it is not showing, or not showing as available, you will not be able to create the OSD. Hope this helps you add back the OSD.
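The zap step mentioned above, as a sketch using the host and device from the earlier error:
# ceph orch device zap R2S17-bare-104 /dev/sdc --force
# ceph orch device ls
After the zap, the disk should show as available and the OSD can be re-added.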
I've copied the ssh pub key to another computer, but when I run the command ceph orch host add my-node1 192.168.50.44 this error appears:
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1756, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 171, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 462, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 107, in <lambda>
    wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)  # noqa: E731
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 96, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 356, in _add_host
    return self._apply_misc([s], False, Format.plain)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 1092, in _apply_misc
    raise_if_exception(completion)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 225, in raise_if_exception
    e = pickle.loads(c.serialized_exception)
TypeError: __init__() missing 2 required positional arguments: 'hostname' and 'addr'
Are you copying the ssh key from the cephadm shell? You need to copy it from the cephadm shell. Also, from the error message it looks like you are not providing the command correctly: "TypeError: __init__() missing 2 required positional arguments: 'hostname' and 'addr'"
You need to follow the steps below:
1. Copy the ssh pub key from the bootstrap node:
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*
2. Try performing ssh from the cephadm shell; you need to do this to accept the ssh key.
3. Now try to add the host:
# ceph orch host add host1 10.10.10.20
You should be able to add the host now.
@@TrichTechInstitute Thanks man, it works
@@Jeverson_Siqueira Glad to help you.
@@Jeverson_Siqueira Did you build an all-in-one Ceph?
Hi, a quick question: can you tell me how to add an OSD if the storage device is unavailable with the reject reason "Insufficient space (<10 extents) on vgs, LVM detected, locked"? Currently all my devices are in this state.
Hi Yogita, did you already create a PV, VG and LV on the disks, or do they have old LVM or filesystems on them?
Hello all, when I execute a command on a Linux terminal it gives me output, but when I execute it through the Ansible shell module I get "no data found". The command is airflow users list. If anyone knows a possible solution, it would be helpful.
Hello Yashwant, try executing the command with its full path, e.g. `/usr/bin/airflow users list`, and see whether it returns any output.
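For example, a quick ad-hoc test of both variants (the inventory group name is a placeholder):
$ ansible airflow_hosts -m shell -a 'airflow users list'
$ ansible airflow_hosts -m shell -a '/usr/bin/airflow users list'
If only the second works, the non-interactive shell Ansible uses is missing the airflow location in its PATH.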
Need an ASCS/ERS cluster configuration video
Hello Amila, sure, I am working on a tutorial for that. It will take some time.
Difficult to understand what's being said. CC isn't working for this video either.
I will check the video and CC.
Awesome bro.. Thanks!
Welcome 👍
I am working on a large-scale cluster setup for SUSE. Will you be able to help me with the specs?
Please let me know what kind of specs you are looking for.
One of my OSD hosts is down. Idk what to do to force-start it.
Is the entire host down, or only the OSD? Did you check for any possible hardware failure?
Great work! keep going
Thanks
Bro, where can we get that Java file (last step)?
You can download it from this link github.com/nmonvisualizer/nmonvisualizer/releases/download/2021-04-04/NMONVisualizer_2021-04-04.jar
How do I capture an nmon report for a particular application?
What information do you want to capture?
Thank you for your work. Could you explain the procedure for creating CRUSH rules for your pools?
Sure.. That is my upcoming video..
Here is the link for the CRUSH rule creation ua-cam.com/video/nTEbMPvqK5o/v-deo.html
good vid, thanks!
Glad you liked it!
Can you do an offline install?
Are you asking about the installation method for a disconnected environment?
@@TrichTechInstitute Yes. I tried it, but I was able to bring up only 1 node and I can't add OSDs from the other nodes
@@yegnasivasai You need to create a local container registry so that the nodes can download the images from there... I will try to prepare a video for this. Are you trying it on the community edition or Red Hat Ceph Storage?
@@TrichTechInstitute Community edition. I did set up a local registry, but I am not able to bring up a three-node cluster
@@yegnasivasai Did you check the registry connectivity from each node? Also, did you set up passwordless authentication to the nodes from the bootstrap node?
Very nice, please keep working on more videos related to Linux
Thank you, I will
nicely done, thanks
Thank you!
Can you show ceph manual install of every components
Sure, I will
Amazing video.. short and crisp… thank you😊
How did you add vdc? 1 GB?
I added it from the KVM Virtual Machine Manager console.