Trich Tech Institute

Videos

#Shorts || Ceph Quincy || Create RADOS Gateway (RGW) User || Demonstration
491 views • 1 year ago
In this short video, I demonstrate how easily you can create an RGW user to connect to the Ceph cluster via the RADOS Gateway (Object Gateway).
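For reference, a minimal sketch of the kind of commands involved (the uid and display name are placeholders, not values from the video):

  # radosgw-admin user create --uid="demo-user" --display-name="Demo User"
  # radosgw-admin user info --uid="demo-user"

The create command prints the S3 access_key and secret_key that the user needs in order to connect through the RADOS Gateway.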
#Shorts || Ceph Storage Quincy || Deploy Rados Gateway (RGW) via Ceph Orchestrator || Demo
391 views • 1 year ago
This is a short demo of deploying the RGW service on Ceph Quincy via the Ceph Orchestrator.
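A rough sketch of the orchestrator commands for this (the service name, host names and placement count are placeholder assumptions):

  # ceph orch apply rgw myrgw --placement="2 host1 host2"
  # ceph orch ls rgw
  # ceph orch ps --daemon_type rgw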
Suse/Opensuse High Availability Cluster - Pacemaker || Understanding STONITH || SUSE / OpenSUSE 15
557 views • 1 year ago
In this episode, I discuss STONITH devices in a SUSE/openSUSE Pacemaker cluster: what they are used for, why they are important, and which STONITH devices are available. I also give an overview of physical node fencing via a BMC and virtual node fencing via libvirt.
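As a hedged illustration only (it assumes a watchdog-backed SBD disk has already been prepared; the resource name is a placeholder), listing the available STONITH plugins and defining an SBD-based STONITH resource might look like this:

  # crm ra list stonith
  # stonith_admin --list-installed
  # crm configure primitive stonith-sbd stonith:external/sbd params pcmk_delay_max="30"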
Suse/Opensuse High Availability Cluster - Pacemaker || Understand Fencing || SLE / OpenSUSE 15
501 views • 1 year ago
In this video, you will learn about the fencing mechanism in a SUSE/openSUSE HA cluster: the available fencing methods, when you need fencing, the role of quorum, and why your cluster needs a fencing mechanism.
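A few commands commonly used to inspect quorum and fencing state on such a cluster (shown here as a sketch, not as part of the video):

  # corosync-quorumtool -s
  # crm status
  # crm configure show | grep stonith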
Ceph Storage (Quincy) || Managing RADOS Block Devices (RBD) || Create, List, Map, Mount & Delete ||
981 views • 1 year ago
In this video, I demonstrate how you can easily create an RBD pool, create RBD users with keyrings to run RBD-related commands, create RBD images, map the images on the client, mount them either manually or automatically, and, if required, delete the RBD images.
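A condensed sketch of that RBD workflow (the pool, image and client names and the size are placeholders):

  # ceph osd pool create rbdpool
  # rbd pool init rbdpool
  # ceph auth get-or-create client.rbdclient mon 'profile rbd' osd 'profile rbd pool=rbdpool'
  # rbd create rbdpool/disk1 --size 10G
  # rbd map rbdpool/disk1 --id rbdclient
  # mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt
  # umount /mnt && rbd unmap /dev/rbd0 && rbd rm rbdpool/disk1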
Ceph Storage (Quincy) || List/Dump/Create (Replicated & Erasure Code)/Delete CRUSH Rules || Demo
537 views • 1 year ago
In this video, we demonstrate how to list existing CRUSH rules, dump the content of a rule, create a replicated or erasure-coded CRUSH rule, and delete an unwanted CRUSH rule.
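A short sketch of those CRUSH rule operations (the rule names, erasure-code profile and device class are placeholder assumptions):

  # ceph osd crush rule ls
  # ceph osd crush rule dump replicated_rule
  # ceph osd crush rule create-replicated fast_rule default host ssd
  # ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  # ceph osd crush rule create-erasure ec42_rule ec42
  # ceph osd crush rule rm fast_rule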
Ceph Storage (Quincy) || Manage Ceph Pools || Create, Delete, Rename, Set Quota, Enable Application
698 views • 1 year ago
In this video, I discuss how you can work with Ceph pools: list existing pools, create new pools, delete pools, rename pools, set pool quotas, enable/disable applications on pools, and finally modify the replication size of a pool along with its min_size. This demo is based on the Ceph community edition Quincy. But the same can be ...
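A compact sketch of those pool operations (pool names and quota values are placeholders; pool deletion also requires mon_allow_pool_delete to be enabled):

  # ceph osd lspools
  # ceph osd pool create testpool
  # ceph osd pool rename testpool datapool
  # ceph osd pool set-quota datapool max_objects 10000
  # ceph osd pool application enable datapool rbd
  # ceph osd pool set datapool size 3
  # ceph osd pool set datapool min_size 2
  # ceph osd pool rm datapool datapool --yes-i-really-really-mean-it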
Ceph Storage (Quincy) || Graceful Poweroff/Shutdown/Reboot Ceph Cluster Nodes via Ceph Orchestrator
608 views • 1 year ago
In this video, we demonstrate how to gracefully shut down or reboot Ceph storage cluster nodes. Before rebooting or powering off the nodes you need to stop the running Ceph services; here we show how to use the Ceph Orchestrator to stop those services before rebooting the nodes.
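A hedged outline of that sequence (the service name is a placeholder; the exact flags and services depend on your cluster):

  # ceph osd set noout
  # ceph orch ls
  # ceph orch stop rgw.myrgw
  ... power off or reboot the nodes ...
  # ceph osd unset noout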
Ceph Storage (Quincy) || Graceful Poweroff/Shutdown/Reboot Ceph Cluster Nodes via Systemctl
561 views • 1 year ago
In this tutorial video, I explain how to gracefully shut down Ceph storage cluster nodes: in which order you should stop the services and power off the nodes, and in which order you should bring them back up, so that you don't accidentally corrupt the cluster and suffer data loss.
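A minimal sketch of the systemd-based approach (assuming a cephadm deployment; the per-cluster target may appear as ceph-<fsid>.target on your hosts):

  # ceph osd set noout            (run once from an admin node)
  # systemctl stop ceph.target    (run on each node, typically OSD nodes before MON/MGR nodes)
  # systemctl start ceph.target   (on power-on, typically MON/MGR nodes first)
  # ceph osd unset noout          (once the cluster is healthy again)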
Ceph Storage (Quincy) || Upgrade/Update Ceph Cluster || Connected Online || Ceph Orchestrator
579 views • 1 year ago
In this video, I demonstrate how to upgrade a Ceph storage cluster using the Ceph Orchestrator when the cluster is connected online (has access to the public container registry). Along with the upgrade process, I discuss the important prerequisites to meet before starting the upgrade. These steps are valid for the Ceph community edition versions Quincy, Pacific, Octopu...
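For orientation, a hedged sketch of the upgrade flow (the target version is just an example):

  # ceph -s
  # ceph orch upgrade start --ceph-version 17.2.7
  # ceph orch upgrade status
  # ceph versions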
Ceph Storage (Quincy) || Deploy/Install/Bootstrap Ceph Cluster using Service Configuration File ||
970 views • 1 year ago
In our last video, I demonstrated how simply you can bootstrap a Ceph storage cluster using the Ceph Orchestrator CLI. But that is ideal for a small/POC cluster setup, not for large production clusters. To bootstrap a large cluster, you need something easier to use and faster to deploy. For that, Ceph has an option to deploy the cluster using a service specification/configuration file. In this video...
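As a rough illustration (the hostnames, addresses and spec layout below are placeholder assumptions, not the file used in the video), a service specification might look like this and be applied at bootstrap time:

  service_type: host
  hostname: node2
  addr: 10.0.0.12
  ---
  service_type: host
  hostname: node3
  addr: 10.0.0.13
  ---
  service_type: osd
  service_id: default_osds
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      all: true

  # cephadm bootstrap --mon-ip 10.0.0.11 --apply-spec cluster-spec.yaml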
Ceph Storage (Quincy) || Deploy/Install/Bootstrap a Ceph Cluster || Cephadm Orchestrator CLI method
3.6K views • 1 year ago
In this video, I demonstrate how easily you can bootstrap a minimal Ceph Quincy cluster using the cephadm Orchestrator CLI method. The same steps can be followed to deploy Red Hat/IBM Ceph Storage 5 and 6, and also SUSE Ceph Storage.
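A minimal sketch of the CLI method (IP addresses and hostnames are placeholders):

  # cephadm bootstrap --mon-ip 10.0.0.11
  # cephadm shell -- ceph -s
  # ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
  # ceph orch host add node2 10.0.0.12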
Ceph Storage (Quincy) || Purge a Ceph Cluster || Destroy/Reclaim/Delete/Remove Ceph Storage Cluster
1.5K views • 1 year ago
In this video, I demonstrate how to purge a Quincy Ceph Storage cluster. Please note: this is a destructive operation, so don't run it on a production cluster. These steps completely remove the cluster, clean up the underlying disks and destroy the data. Once you perform these steps, you will not be able to recover the data. Usually you can follow these steps when you are s...
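For reference, a sketch of the purge procedure described in the upstream documentation (run the last command on every host; the fsid comes from your own cluster):

  # ceph fsid
  # ceph mgr module disable cephadm
  # cephadm rm-cluster --force --zap-osds --fsid <fsid>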
Ceph Storage (Quincy) || Replace Object Storage Device (OSD) from Ceph Cluster || Ceph Orchestrator
823 views • 1 year ago
In this video, I demonstrate how to replace an OSD in the Ceph cluster. This demo is based on the Ceph community edition Quincy. The same steps can be followed on Red Hat Ceph Storage 5 and 6, and also on SUSE Ceph Storage.
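A hedged sketch of an OSD replacement (the OSD id, host and device path are placeholders):

  # ceph orch osd rm 3 --replace
  # ceph orch osd rm status
  # ceph orch device zap node2 /dev/sdb --force
  # ceph orch daemon add osd node2:/dev/sdb

With --replace the OSD id is kept; if the drives are managed by an OSD service spec, the last step may not be needed because the orchestrator can redeploy onto the cleaned device automatically.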
Ceph Storage (Quincy) || Remove Object Storage Device (OSD) from Ceph Cluster || Ceph Orchestrator
755 views • 1 year ago
Ceph Storage (Quincy) || Add Object Storage Device (OSD) to Ceph Cluster || Ceph Orchestrator CLI ||
1.2K views • 1 year ago
Ceph Storage (Quincy) || Setup Ceph Admin Node || Perform Ceph Administration tasks
1.1K views • 1 year ago
Ceph Storage [Quincy] || Setup Ceph Client Node || Connect Ceph Cluster and run Ceph commands
5K views • 1 year ago
Suse/Opensuse High Availability Cluster - Pacemaker || Split Brain Scenario
801 views • 2 years ago
Suse/Opensuse High Availability Cluster - Pacemaker || Configure SBD_STARTMODE behavior || Demo
1.1K views • 2 years ago
Suse/Opensuse HA Cluster - Pacemaker || Part 3: Deploy a Cluster - Secure Cluster Communications
832 views • 4 years ago
Suse/Opensuse HA Cluster - Pacemaker || Part 2: Deploy a Cluster - Cluster setup step by step
1K views • 4 years ago
Suse/Opensuse HA Cluster - Pacemaker || Part 1: Deploy a Cluster - Preparation Checklist
940 views • 4 years ago
Suse/Opensuse High Availability Cluster - Pacemaker || Cluster Node Preparation
642 views • 4 years ago
Part 5: Pacemaker Cluster Implementation Requirements || Design Test case, Testing & Documentation
499 views • 4 years ago
Part 4: Suse / Opensuse Pacemaker Cluster Implementation Requirements || Planning Storage
487 views • 4 years ago
Part 3: Suse / Opensuse Pacemaker Cluster Implementation Requirements || Determining Expectations
484 views • 4 years ago
Part 2: Suse Pacemaker Cluster Implementation Requirements || Collecting Required Parameters
695 views • 4 years ago
Part 1: Suse Pacemaker Cluster Implementation Requirements || Overview of Implementation process
984 views • 4 years ago

COMMENTS

  • @RajaLoganathan-rs7vz
    @RajaLoganathan-rs7vz 1 month ago

    I was breaking my head for last 12 hours, and you gave me the solution in 8mins, THANKS A LOT...!

  • @OsinaBoom
    @OsinaBoom 2 months ago

    Answers to the quiz: 1. What function does STONITH provide? Answer: STONITH lets us reset a node that has hung for various reasons (kernel panic, lost network connection between nodes). 2. Where must you run a STONITH resource? Answer: on the nodes that are not being rebooted by that same STONITH device.

  • @mr.physics2578
    @mr.physics2578 2 months ago

    Hi Sir, you said the CRM on the DC pushes changes to the other nodes; do I understand correctly that every node in the cluster has its own CRM?

  • @mithuparmar
    @mithuparmar 3 months ago

    Great👍

  • @raughboy188
    @raughboy188 4 months ago

    It's been 4 years now, but this gives me an idea of how to deal with Canonical's nonsense. Canonical is forcing snapd after every upgrade and update via polkit. Reconfiguring the policy may prevent anything you don't want from being installed.

  • @prathmeshlavate4216
    @prathmeshlavate4216 4 months ago

    Is a shared disk required?

    • @TrichTechInstitute
      @TrichTechInstitute 4 months ago

      It is required for the quorum disk. If you have a physical server with remote power management, you can use that instead (e.g. HP servers with iLO).

  • @houssemgherissi7310
    @houssemgherissi7310 4 months ago

    Great ❤ thank you

  • @frankmorales2353
    @frankmorales2353 6 months ago

    Pretty nice video, clean, neat and in the expected Indian accent. Pretty nice job bro!!! You got my like!!!

  • @skawashkar
    @skawashkar 6 months ago

    Great effort. You should make a video on ISCSI target mounting too.

  • @skawashkar
    @skawashkar 6 months ago

    Great video. Great effort. Do you have a tutorial on iSCSI mounting to a target from Ceph?

    • @TrichTechInstitute
      @TrichTechInstitute 6 months ago

      Not yet. Also, I have not put much effort into iSCSI as it is going to be retired from Ceph.

    • @skawashkar
      @skawashkar 6 months ago

      @@TrichTechInstitute What will be the alternative to use instead of iscsi?

    • @TrichTechInstitute
      @TrichTechInstitute 6 months ago

      @@skawashkar it will be NVMe-oF: docs.ceph.com/en/latest/rbd/nvmeof-overview/

    • @skawashkar
      @skawashkar 6 months ago

      @@TrichTechInstitute Is it possible to configure multiple RBD pools in the iscsi-gateway.cfg? Because when I add multiple pools the rdb-gateway-api service crashes and fails to start.

  • @NyorexDC
    @NyorexDC 6 months ago

    thanks 🙏

  • @alfarahat
    @alfarahat 7 months ago

    Need course please

  • @AnkitSharma-br4se
    @AnkitSharma-br4se 7 months ago

    Hi Trich, I have installed oVirt 4.5 on CentOS Stream 9 and a Ceph cluster v18 (Reef) deployed by cephadm on Debian 12. I want to integrate the Ceph cluster with the oVirt cluster and use it as storage for my VMs. How can I do it?

    • @TrichTechInstitute
      @TrichTechInstitute 7 months ago

      Hi Ankit, I don't have any video or doc prepared for this scenario. A quick Google search turned up the blog below, which you can check. Usually you need RBD plus an iSCSI gateway for this. Please note that Ceph is going to drop support for the iSCSI gateway. blogs.ovirt.org/2021/07/using-ceph-only-storage-for-ovirt-datacenter/

    • @AnkitSharma-br4se
      @AnkitSharma-br4se 7 months ago

      @@TrichTechInstitute Thanks for your response but that link tutorial did not work for me.

  • @skawashkar
    @skawashkar 8 months ago

    My CLI froze after executing this command.

    • @TrichTechInstitute
      @TrichTechInstitute 7 months ago

      Do you have enough resources available on the node, i.e. sufficient RAM and CPU? If not, the node can hang.

  • @syedsaifulla8961
    @syedsaifulla8961 8 months ago

    Nicely explained

  • @sshameed
    @sshameed 8 months ago

    I was searching for a clear explanation of "promoted" and "demoted", and finally found it here (3:23), thanks. Simple language, excellent material.

    • @TrichTechInstitute
      @TrichTechInstitute 8 months ago

      Thanks for your feedback and glad to know it helped you.

  • @AnkitSharma-br4se
    @AnkitSharma-br4se 8 months ago

    Hi, I am trying to create a Ceph cluster but am unable to; can you guide me from scratch? Many thanks.

    • @TrichTechInstitute
      @TrichTechInstitute 8 months ago

      Hello, Please share your concerns so that I can help you

  • @saravananarumugam6677
    @saravananarumugam6677 9 months ago

    Do we need to run `sbd -d /dev/SBD -4 180 -1 90 create` on all nodes before ha-cluster-join/init? I need this to work for a 13-node cluster.

  • @digbijaypaul8474
    @digbijaypaul8474 11 months ago

    simple and nice content.

  • @fazlurrahmatullah3017
    @fazlurrahmatullah3017 1 year ago

    Is it the same way to disband a ceph cluster on a proxmox server?

  • @jonesen7792
    @jonesen7792 1 year ago

    Great video. Thank you

  • @subhankarb100
    @subhankarb100 1 year ago

    thanks for your nice video ...can you please let me know if you have any video for data migration from one cluster to another cluster

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Hello, I don't have any video ready for the data migration. I need to prepare one. Please stay tuned.

  • @PavanBSDevarakonda
    @PavanBSDevarakonda 1 year ago

    You have given a nice example and a clear explanation. I have a similar kind of requirement to work on an async task, and your video session helped me trigger new thoughts. Thanks for all the great work and for sharing.

  • @Tech4You22
    @Tech4You22 1 year ago

    SUSE is paid. How do I build a lab? Is there any open-source alternative?

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      If you want to set up a lab, you can create a free account and download the SUSE evaluation package, which is valid for 2-6 months. Otherwise you can use openSUSE, which is a free alternative to SUSE.

  • @khudadadchangezi1565
    @khudadadchangezi1565 1 year ago

    you explained and demonstrated the live commands. Good.

  • @tanmaykanungo2051
    @tanmaykanungo2051 1 year ago

    While configuring the cluster it is asking for a qdevice. Any idea what it is and how to give it an address?

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      The qdevice is the quorum disk. I have shown it in the installation video. It is a small shared disk which is available to all the cluster nodes and is used for fencing. Usually in production you need 3 quorum disks to avoid a single point of failure.

    • @tanmaykanungo2051
      @tanmaykanungo2051 1 year ago

      @@TrichTechInstitute I have configured the shared disk, but after entering a virtual IP it still asks for a qdevice and its address; I'm confused about what to provide here.

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      @@tanmaykanungo2051 Are you following my video and getting this prompt? What do you need the virtual IP for? I guess you can skip the qdevice section.

    • @tanmaykanungo2051
      @tanmaykanungo2051 1 year ago

      @@TrichTechInstitute cool, I will skip it for now. Thanks for the tutorial.

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      @@tanmaykanungo2051 👌

  • @cyberzeus1
    @cyberzeus1 1 year ago

    What do you mean by "update clients"? It's not clear. Great work btw!

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Client here means a node that connects to the Ceph cluster. Such nodes have a package called ceph-common installed in order to connect to the cluster (e.g. via CephFS or RBD; connecting via the S3 API of RGW doesn't need ceph-common). Once you upgrade the Ceph cluster, you need to upgrade the ceph-common package to the same version on these clients, and also on the admin node where you run the various ceph commands.
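      A minimal sketch of that client-side step (the package-manager commands are distro-dependent assumptions, not taken from the video): check the cluster's daemon versions with `ceph versions`, compare with `ceph --version` on each client or admin node, then upgrade the package.

      # ceph versions
      # ceph --version
      # dnf upgrade ceph-common
      # apt install --only-upgrade ceph-common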

    • @cyberzeus1
      @cyberzeus1 1 year ago

      @@TrichTechInstitute is there a way that we can chat in private?

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      @@cyberzeus1 you can ping me at connect.trichtech@gmail.com

  • @abhishekchoudhary4682
    @abhishekchoudhary4682 1 year ago

    I can't understand anything he is saying.

  • @x-macpro6161
    @x-macpro6161 1 year ago

    Hi there, I have 3 nodes, each with 6 NVMe PCIe 4.0 drives and 4x 3.5" HDDs. 1. Should I create multiple OSDs on the NVMe drives, or should I define them as BlueStore WAL devices or something similar? 2. Should I create a CRUSH rule policy to define zone-level failover for the different disk classes (HDD or NVMe) for performance and backup? In short, I want to use NVMe for performance and HDD for backup. Thanks.

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      This depends on your use case. You can use the NVMe drives as DB/WAL devices and the HDDs as block devices for the OSDs. Or you can create OSDs on the NVMe drives alongside the HDDs, and then create CRUSH rules that use the NVMe drives for the production/high-performance pool and the HDDs for backup/non-prod data pools. Both are doable.
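      A rough sketch of both options (the service_id, pool and rule names are placeholders, and the device class may be reported as ssd rather than nvme depending on how the drives are detected). Option 1, an OSD spec that pairs HDD data devices with NVMe DB/WAL devices:

      service_type: osd
      service_id: hdd_with_nvme_db
      placement:
        host_pattern: '*'
      spec:
        data_devices:
          rotational: 1
        db_devices:
          rotational: 0

      Option 2, separate CRUSH rules per device class, with pools pointed at them:

      # ceph osd crush class ls
      # ceph osd crush rule create-replicated fast_rule default host nvme
      # ceph osd crush rule create-replicated slow_rule default host hdd
      # ceph osd pool set prodpool crush_rule fast_rule
      # ceph osd pool set backuppool crush_rule slow_rule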

  • @HarikaChintapally
    @HarikaChintapally 1 year ago

    Hi.. I'm unable to add a previously deleted OSD to the host. Getting the below error:
    drut@R2S15-bare-107:~$ sudo ceph orch daemon add osd R2S17-bare-104:/dev/sdc
    Created no osd(s) on host R2S17-bare-104; already created?   <<<<<< error
    drut@R2S15-bare-107:~$

    • @HarikaChintapally
      @HarikaChintapally 1 year ago

      Is there a way to add the OSD from ceph GUI?

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      @@HarikaChintapally You can add it, but before that please ensure the disk is cleaned up. If it still has the PV/VG/LV from the previous OSD, Ceph will not create the OSD. You need to zap the device first, or clean up those LV/VG/PV on the disk manually, and then try to add it back. Also check the `ceph orch device ls` output to see whether the disk shows up there and, if so, whether it is shown as available. If it is not listed, or not shown as available, you will not be able to create the OSD. Hope this helps you add the OSD back.
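      A minimal sketch of that sequence, using the host and device from the comment above:

      # ceph orch device ls | grep R2S17-bare-104
      # ceph orch device zap R2S17-bare-104 /dev/sdc --force
      # ceph orch daemon add osd R2S17-bare-104:/dev/sdc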

  • @Jeverson_Siqueira
    @Jeverson_Siqueira 1 year ago

    I've copied the ssh.pub key to another computer, but when I run the command `ceph orch host add my-node1 192.168.50.44` this error appears:
    Error EINVAL: Traceback (most recent call last):
      File "/usr/share/ceph/mgr/mgr_module.py", line 1756, in _handle_command
        return self.handle_command(inbuf, cmd)
      File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 171, in handle_command
        return dispatch[cmd['prefix']].call(self, cmd, inbuf)
      File "/usr/share/ceph/mgr/mgr_module.py", line 462, in call
        return self.func(mgr, **kwargs)
      File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 107, in <lambda>
        wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)  # noqa: E731
      File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 96, in wrapper
        return func(*args, **kwargs)
      File "/usr/share/ceph/mgr/orchestrator/module.py", line 356, in _add_host
        return self._apply_misc([s], False, Format.plain)
      File "/usr/share/ceph/mgr/orchestrator/module.py", line 1092, in _apply_misc
        raise_if_exception(completion)
      File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 225, in raise_if_exception
        e = pickle.loads(c.serialized_exception)
    TypeError: __init__() missing 2 required positional arguments: 'hostname' and 'addr'

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Are you copying the ssh key from the cephadm shell? You need to copy it from the cephadm shell. Also, from the error message it looks like you are not providing the command correctly: "TypeError: __init__() missing 2 required positional arguments: 'hostname' and 'addr'"

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      You need to follow the steps below:
      1. Copy the ssh pub key from the bootstrap node:
         # ssh-copy-id -f -i /etc/ceph/ceph.pub root@<new-host>
      2. Try ssh-ing to the new host from the cephadm shell; you need to do this to accept the ssh key.
      3. Now try to add the host:
         # ceph orch host add host1 10.10.10.20
      You should be able to add the host now.

    • @Jeverson_Siqueira
      @Jeverson_Siqueira 1 year ago

      @@TrichTechInstitute Thanks man, it works!

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      @@Jeverson_Siqueira Glad to help you.

    • @quyphaminh689
      @quyphaminh689 1 year ago

      @@Jeverson_Siqueira you built an all-in-one Ceph, right?

  • @yogitamutyala4426
    @yogitamutyala4426 1 year ago

    Hi, a quick question. Can you tell me how to add an OSD if the storage device is unavailable with the reject reason "Insufficient space (<10 extents) on vgs, LVM detected, locked"? Currently all my devices are in this state.

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Hi Yogita, did you already create a PV, VG and LV on the disks, or do they have an old LVM layout or filesystem on them?

  • @pawaryash007
    @pawaryash007 1 year ago

    Hello all, when I execute a command in a Linux terminal it gives me output, but when I execute it through the Ansible shell module I get "no data found". The command is `airflow users list`. If anyone knows a possible solution, it would be helpful.

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Hello Yashwant, Try to execute the command with full path, e.g. `/usr/bin/airflow users list` and see whether it is returning any output.
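      As a rough illustration of that suggestion (the host group name airflow_host is a placeholder assumption): the shell module runs in a non-interactive shell, so PATH and environment variables from your login profile may be missing, which is why the full path can make a difference.

      # ansible airflow_host -m ansible.builtin.shell -a '/usr/bin/airflow users list'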

  • @amilaabeysinghe6411
    @amilaabeysinghe6411 1 year ago

    need ASCS/ERS Cluster Configuration Video

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Hello Amila, sure, I am working on a tutorial for that. It will take some time.

  • @mainframecn
    @mainframecn 1 year ago

    Difficult to understand what's being said. CC isn't working for this video either.

  • @keerthanas7089
    @keerthanas7089 1 year ago

    Awesome bro.. Thanks!

  • @chandu188
    @chandu188 1 year ago

    I am working on a large-scale cluster setup for SUSE; will you be able to help me with the specs?

  • @mohdsaqib2622
    @mohdsaqib2622 1 year ago

    One of my OSD hosts is down. I don't know what to do to force-start it.

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Is the entire host down or only the OSD? Did you check for any possible hardware failure?

  • @PrimeInnerCircle
    @PrimeInnerCircle 1 year ago

    Great work! keep going

  • @venkateshmakireddi69
    @venkateshmakireddi69 1 year ago

    Bro where can we get that Java file (last step)

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      You can download it from this link: github.com/nmonvisualizer/nmonvisualizer/releases/download/2021-04-04/NMONVisualizer_2021-04-04.jar

  • @roopalikhot6567
    @roopalikhot6567 1 year ago

    How do I capture an nmon report for a particular application?

  • @0xDEADBEEF
    @0xDEADBEEF 1 year ago

    Thank you for your work. Could you explain the procedure for creating CRUSH rules for your pools?

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Sure.. That is my upcoming video..

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Here is the link for the CRUSH rule creation ua-cam.com/video/nTEbMPvqK5o/v-deo.html

  • @emilianogutierrez3284
    @emilianogutierrez3284 1 year ago

    good vid, thanks!

  • @yegnasivasai
    @yegnasivasai 1 year ago

    Can you do offline install?

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      Are you asking about the installation method for a disconnected environment?

    • @yegnasivasai
      @yegnasivasai 1 year ago

      @@TrichTechInstitute Yes. I tried it, but I was only able to bring up 1 node and I can't add OSDs from the other nodes.

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      @@yegnasivasai You need to create a local container registry so that the nodes can download the images from there... I will try to prepare a video for this. Are you trying it on Community edition or Red Hat Ceph storage?

    • @yegnasivasai
      @yegnasivasai 1 year ago

      @@TrichTechInstitute Community edition. I did set up a local registry, but I am not able to bring up a three-node cluster.

    • @TrichTechInstitute
      @TrichTechInstitute 1 year ago

      @@yegnasivasai Did you check the registry connectivity from each node? Also, did you set up passwordless authentication to the nodes from the bootstrap node?

  • @chandu188
    @chandu188 1 year ago

    Very nice, please keep working on more videos related to Linux.

  • @seccentral
    @seccentral 1 year ago

    nicely done, thanks

  • @sudhirseelam-wj7kt
    @sudhirseelam-wj7kt 1 year ago

    Can you show a manual Ceph install of every component?

  • @TarunikaShrivastava
    @TarunikaShrivastava 1 year ago

    Amazing video.. short and crisp… thank you😊

  • @Unknown-rh5jh
    @Unknown-rh5jh 1 year ago

    How did you add vdc? 1Gb?