Before you begin
Before registering the IBM Storage Ceph nodes, be sure that you have:
- At least one running virtual machine (VM) or bare-metal server with an active internet connection.
 - Red Hat Enterprise Linux 9.4 or 9.5 with ansible-core bundled into AppStream. Check its availability with:
sudo dnf info ansible-core
If it is not available, install it with:
sudo dnf install ansible-core
 - A valid IBM subscription with the appropriate entitlements.
 - Root-level access to all nodes.
 - For the latest supported Red Hat Enterprise Linux versions, see the Compatibility matrix.
 
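Optionally, before registering, you can check each node's current registration state. This check uses the standard subscription-manager status command:
sudo subscription-manager status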
Registering Storage Ceph Nodes
Register the system, and when prompted, enter your Red Hat customer portal credentials.
sudo subscription-manager register
For example,
[root@admin ~]# subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: USERNAME
Password: PASSWORD
The system has been registered with ID: ID
The registered system name is: SYSTEM_NAME
Pull the latest subscription data.
subscription-manager refresh
For example,
[root@admin ~]# subscription-manager refresh
All local data refreshed
Disable the software repositories.
subscription-manager repos --disable=*
Enable the Red Hat Enterprise Linux BaseOS and AppStream repositories.
subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
Update the system.
dnf update
Enable the ceph-tools repository.
curl https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/ibm-storage-ceph-8-rhel-9.repo | sudo tee /etc/yum.repos.d/ibm-storage-ceph-8-rhel-9.repo
Install the license package for IBM Storage Ceph.
dnf install ibm-storage-ceph-license
Accept the provisions.
touch /usr/share/ibm-storage-ceph-license/accept
Repeat the above steps on all the nodes of the storage cluster.
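Optionally, confirm on each node that the IBM Storage Ceph repository is now visible to dnf. This check uses standard dnf output and the repository file added above:
dnf repolist | grep -i ceph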
Install cephadm-ansible and cephadm on the admin node only.
dnf install cephadm-ansible -y
dnf install cephadm -y
Configuring Ansible inventory location
Navigate to the /usr/share/cephadm-ansible/ directory.
cd /usr/share/cephadm-ansible
Optional: Create subdirectories for staging and production.
mkdir -p inventory/staging inventory/production
Optional: Edit the ansible.cfg file and add the following lines to assign a default inventory location.
[defaults]
inventory = ./inventory/staging
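With this default in place, Ansible commands run from /usr/share/cephadm-ansible use the staging inventory unless you pass -i. As an optional check, you can list the hosts that Ansible resolves with the standard ansible-inventory command:
ansible-inventory --list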
Optional: Create an inventory hosts file for each environment.
touch inventory/staging/hosts
touch inventory/production/hosts
Open and edit each hosts file and add the nodes and [admin] group.
NODE_NAME_1
NODE_NAME_2
[admin]
ADMIN_NODE_NAME_1
Replace NODE_NAME_1 and NODE_NAME_2 with the names of the Ceph nodes, such as monitors, OSDs, MDSs, and gateway nodes.
Replace ADMIN_NODE_NAME_1 with the name of the node where the admin keyring is stored.
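For illustration, a staging hosts file that uses the node names appearing later in this document would look like the following; substitute your own node names:
ceph-node-1
ceph-node-2
ceph-node-3
[admin]
ceph-node-admin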
Enabling SSH login as root user on Red Hat Enterprise Linux 9
Prerequisites
- Root-level access to all nodes.
 
Procedure
Set the PermitRootLogin option to yes. Rather than editing /etc/ssh/sshd_config directly, you can add a drop-in file under /etc/ssh/sshd_config.d/.
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf
Restart the SSH service.
systemctl restart sshd.service
Log in to the node as the root user.
ssh root@HOST_NAME
Replace HOST_NAME with the host name of the Ceph node. For example,
ssh root@host01
Enter the root password when prompted.
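Optionally, verify the effective setting; sshd -T prints the running SSH daemon's effective configuration:
sshd -T | grep -i permitrootlogin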
Creating an Ansible user with sudo access
Complete these steps on each node in the storage cluster.
Log in to the node as the root user.
ssh root@HOST_NAME
Replace HOST_NAME with the host name of the Ceph node.
Create a new Ansible user.
adduser ceph-admin
This procedure uses ceph-admin as the user name for the Ansible user; replace it with a user name of your choice.
Important: Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.
Set a new password for this user.
passwd ceph-admin
Configure sudo access for the newly created user.
cat << EOF | sudo tee /etc/sudoers.d/ceph-admin
ceph-admin ALL=(ALL) NOPASSWD:ALL
EOF
Assign the correct file permissions to the new file.
chmod 0440 /etc/sudoers.d/ceph-admin
Enabling password-less SSH for Ansible
As the Ansible user, generate the SSH key pair; accept the default file name and leave the passphrase empty.
su - ceph-admin
ssh-keygen
Copy the public key to all nodes in the storage cluster.
ssh-copy-id ceph-admin@host01
If copying the SSH key fails, manually append the public key to the authorized_keys file on the other nodes, as sketched below.
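A minimal sketch of that manual fallback, assuming the key pair was generated with the default file name (~/.ssh/id_rsa.pub) and that password login to the target node still works; the host name is illustrative:
cat ~/.ssh/id_rsa.pub | ssh ceph-admin@host02 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'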
Create the user's SSH config file.
touch ~/.ssh/config
Open the config file for editing.
Set values for the Hostname and User options for each node in the storage cluster.
Important: By configuring the ~/.ssh/config file you do not have to specify the -u USER_NAME option each time you execute the ansible-playbook command.
Host ceph-node-admin
Hostname 
User ceph-admin   
Host ceph-node-1
Hostname 65.2.172.224
User ceph-admin
Host ceph-node-2
Hostname 65.2.182.40
User ceph-admin
Host ceph-node-3
Hostname 13.233.8.7
User ceph-admin
Set the correct file permissions for the ~/.ssh/config file.
chmod 600 ~/.ssh/config
Running the preflight playbook
Note: In the following procedure, host01 is the bootstrap node.
Navigate to the /usr/share/cephadm-ansible directory.
Open and edit the hosts file and add your nodes.
ceph-node-1
ceph-node-2
ceph-node-3
[admin]
ceph-node-admin
Run the preflight playbook.
Run the preflight playbook either by running the playbook for all hosts in the cluster or for a selected set of hosts in the cluster.
Note: When running the preflight playbook, cephadm-ansible automatically installs the chrony and ceph-common packages on the client nodes.
After installation is complete, cephadm resides in the /usr/sbin/ directory.
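If you want to run the playbook only for a selected set of hosts, as mentioned above, you can add Ansible's standard --limit option; a sketch, where GROUP_NAME|NODE_NAME is an illustrative placeholder:
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=ibm" --limit GROUP_NAME|NODE_NAME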
Run the playbook for all hosts in the cluster.
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=ibm"
For example,
ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm"
Bootstrapping a new storage cluster
Before you begin
Before you begin, make sure that you have the following prerequisites in place:
- An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.
 - Login access to cp.icr.io/cp. For information about obtaining credentials for cp.icr.io/cp, see Obtaining an entitlement key.
 - A minimum of 10 GB of free space for /var/lib/containers/.
 - Root-level access to all nodes.
 
Procedure
Bootstrap a storage cluster.
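The general form of the bootstrap command is sketched below; CLUSTER_NETWORK, MON_IP, and ENTITLEMENT_KEY are illustrative placeholders that you replace with values from your environment (the entitlement key is used as the password for cp.icr.io/cp). The command that follows shows example values.
cephadm bootstrap --cluster-network CLUSTER_NETWORK --mon-ip MON_IP --registry-url cp.icr.io/cp --registry-username cp --registry-password ENTITLEMENT_KEY --yes-i-know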
cephadm bootstrap --cluster-network 172.31.0.0/16 --mon-ip 172.31.34.236 --registry-url cp.icr.io/cp --registry-username cp --registry-password  --yes-i-know
Open Port Ranges
In addition to the common ports (22, 80, and 443), open the following port numbers for IBM Storage Ceph; a firewalld sketch follows the list:
- 3300
 - 6789
 - 8443
 - 8765
 - 9093
 - 9283
 
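If firewalld is the active firewall on your nodes, the following sketch opens the listed ports over TCP; adjust it to your zones and environment:
for port in 3300 6789 8443 8765 9093 9283; do
    firewall-cmd --permanent --add-port=${port}/tcp
done
firewall-cmd --reload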
Distributing SSH keys
You can use the cephadm-distribute-ssh-key.yml playbook to distribute the SSH keys instead of creating and distributing the keys manually.
About this task
The playbook distributes an SSH public key over all hosts in the inventory. You can also generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Procedure
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node.
[ansible@admin ~]$ cd /usr/share/cephadm-ansible
From the Ansible administration node, distribute the SSH keys. The optional cephadm_pubkey_path parameter is the full path name of the SSH public key file on the Ansible controller host.
Note: If cephadm_pubkey_path is not specified, the playbook gets the key from the cephadm get-pub-key command. This implies that you have at least bootstrapped a minimal cluster.
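If you want to inspect the key that would be used in that case, you can print it from a node that has the Ceph CLI and an admin keyring available, for example inside the cephadm shell on the bootstrap node:
ceph cephadm get-pub-key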
ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=USER_NAME -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=ADMIN_NODE_NAME_1
For example:
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01
Adding multiple hosts
Before you begin
Before you begin, make sure that you have the following prerequisites in place:
- A storage cluster that has been installed and bootstrapped.
 - Root-level access to all nodes in the storage cluster.
 
About this task
Note: Be sure to create the hosts.yaml file within a host container, or create the file on the local host and then use the cephadm shell to mount the file within the container. The cephadm shell automatically places mounted files in /mnt. If you create the file directly on the local host and then apply the hosts.yaml file instead of mounting it, you might see a File does not exist error.
Procedure
- Copy over the public ssh key to each of the hosts that you want to add.
 - Use a text editor to create a hosts.yaml file.
 - Add the host descriptions to the hosts.yaml file.
 
Include the labels to identify placements for the daemons that you want to deploy on each host. Separate each host description with three dashes (---).
service_type: host
addr:
hostname: host02
labels:
- mon
- osd
- mgr
---
service_type: host
addr:
hostname: host03
labels:
- mon
- osd
- mgr
---
service_type: host
addr:
hostname: host04
labels:
- mon
- osd
Mount the hosts.yaml file.
If you created the hosts.yaml file directly on the local host, use the cephadm shell to mount the file.
cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml
For example,
[root@host01 ~]# cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml
If you created the hosts.yaml file within the host container, run the ceph orch apply command. For example,
[ceph: root@host01 /]# ceph orch apply -i hosts.yaml
Added host 'host02' with addr '10.10.128.69'
Added host 'host03' with addr '10.10.128.70'
Added host 'host04' with addr '10.10.128.71'
View the list of hosts and their labels.
[ceph: root@host01 /]# ceph orch host ls
HOST      ADDR      LABELS          STATUS
host02   host02    mon,osd,mgr
host03   host03    mon,osd,mgr
host04   host04    mon,osd