Before you begin
Before registering the IBM Storage Ceph nodes, be sure that you have:
- At least one running virtual machine (VM) or bare-metal server with an active internet connection.
- Red Hat Enterprise Linux 9.4 or 9.5 with ansible-core available in the AppStream repository (a quick release check follows this list). Verify that the package is available:
sudo dnf info ansible-core
- If it is not available, install it with:
sudo dnf install ansible-core
- A valid IBM subscription with the appropriate entitlements.
- Root-level access to all nodes.
- For the latest supported Red Hat Enterprise Linux versions, see the Compatibility matrix.
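For example, to confirm which Red Hat Enterprise Linux release a node is running before you register it, you can run a quick check (the exact release string depends on your installation):
cat /etc/redhat-release
# Example output: Red Hat Enterprise Linux release 9.4 (Plow)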
Registering IBM Storage Ceph nodes
Register the system, and when prompted, enter your Red Hat customer portal credentials.
sudo subscription-manager register
For example,
[root@admin ~]# subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: USERNAME
Password: PASSWORD
The system has been registered with ID: ID
The registered system name is: SYSTEM_NAME
Refresh the subscription data.
subscription-manager refresh
For example,
[root@admin ~]# subscription-manager refresh
All local data refreshed
Disable all software repositories.
subscription-manager repos --disable=*
Enable the Red Hat Enterprise Linux BaseOS and AppStream repositories.
subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
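Optionally, confirm that only the BaseOS and AppStream repositories remain enabled before updating; a minimal check:
subscription-manager repos --list-enabled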
Update the system.
dnf update
Enable the ceph-tools repository.
curl https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/ibm-storage-ceph-8-rhel-9.repo | sudo tee /etc/yum.repos.d/ibm-storage-ceph-8-rhel-9.repo
Install the IBM Storage Ceph license package.
dnf install ibm-storage-ceph-license
Accept the license provisions.
touch /usr/share/ibm-storage-ceph-license/accept
Repeat the above steps on all the nodes of the storage cluster.
Install cephadm-ansible and cephadm on the admin node only.
dnf install cephadm-ansible -y
dnf install cephadm -y
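To confirm that the packages are in place on the admin node, check the installed versions; a minimal check (version strings vary by release):
cephadm version
rpm -q cephadm-ansible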
Configuring Ansible inventory location
Navigate to the /usr/share/cephadm-ansible/ directory.
cd /usr/share/cephadm-ansible
Optional: Create subdirectories for staging and production.
mkdir -p inventory/staging inventory/production
Optional: Edit the ansible.cfg file and add the following lines to assign a default inventory location.
[defaults]
inventory = ./inventory/staging
Optional: Create an inventory hosts file for each environment.
touch inventory/staging/hosts
touch inventory/production/hosts
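If you set the default inventory to staging in ansible.cfg, you can still target the production inventory explicitly with the -i option at run time. A minimal sketch, using the cephadm-preflight.yml playbook that is run later in this procedure:
ansible-playbook -i inventory/production/hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm"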
Open each hosts file for editing, and add the nodes and the [admin] group.
NODE_NAME_1
NODE_NAME_2
[admin]
ADMIN_NODE_NAME_1
Replace NODE_NAME_1 and NODE_NAME_2 with the names of the Ceph nodes, such as the monitor, OSD, MDS, and gateway nodes. Replace ADMIN_NODE_NAME_1 with the name of the node where the admin keyring is stored.
Enabling SSH login as the root user on Red Hat Enterprise Linux 9
Prerequisites
- Root-level access to all nodes.
Procedure
Set the PermitRootLogin option to yes by creating a drop-in configuration file under /etc/ssh/sshd_config.d/.
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf
Restart the SSH service.
systemctl restart sshd.service
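To verify that the new setting is active, dump the effective sshd configuration; a minimal check (the option name is printed in lowercase):
sshd -T | grep -i permitrootlogin
# Expected output: permitrootlogin yes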
Log in to the node as the root user.
ssh root@HOST_NAME
Replace HOST_NAME with the host name of the Ceph node.
ssh root@host01
Enter the root password when prompted.
Creating an Ansible user with sudo access
Complete these steps on each node in the storage cluster.
Log in to the node as the root user.
ssh root@HOST_NAME
Replace HOST_NAME with the host name of the Ceph node.
Create a new Ansible user.
adduser ceph-admin
The examples in this procedure use ceph-admin as the Ansible user name. Replace it with a user name of your choice.
Important: Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.
Set a new password for this user.
passwd ceph-admin
Configure sudo access for the newly created user.
cat << EOF | sudo tee /etc/sudoers.d/ceph-admin
ceph-admin ALL=(ALL) NOPASSWD:ALL
EOF
Assign the correct file permissions to the new file.
chmod 0440 /etc/sudoers.d/ceph-admin
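To verify that password-less sudo works for the new user, switch to the user and run a harmless privileged command; a minimal check:
su - ceph-admin
sudo whoami
# Expected output: root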
Enabling password-less SSH for Ansible
As the Ansible user, generate an SSH key pair. Accept the default file name and leave the passphrase empty.
su - ceph-admin
ssh-keygen
Copy the public key to all nodes in the storage cluster.
ssh-copy-id ceph-admin@host01
If ssh-copy-id fails, manually append the public key to the ~/.ssh/authorized_keys file on each of the other nodes, as shown below.
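A minimal sketch of adding the key manually, assuming the default RSA key file name (id_rsa.pub); adjust the file name if you generated a different key type, and replace the placeholder key string with the contents of your own public key:
# On the admin node, as ceph-admin, print the public key so it can be copied.
cat ~/.ssh/id_rsa.pub
# On each remote node, as ceph-admin, append the copied key to authorized_keys.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo 'ssh-rsa AAAA...copied-public-key... ceph-admin@admin' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys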
Create the user's SSH config file.
touch ~/.ssh/config
Open the config file for editing.
Set values for the Hostname and User options for each node in the storage cluster.
Important: By configuring the ~/.ssh/config file you do not have to specify the -u USER_NAME option each time you execute the ansible-playbook command.
Host ceph-node-admin
    Hostname
    User ceph-admin
Host ceph-node-1
    Hostname 65.2.172.224
    User ceph-admin
Host ceph-node-2
    Hostname 65.2.182.40
    User ceph-admin
Host ceph-node-3
    Hostname 13.233.8.7
    User ceph-admin
Set the correct file permissions for the ~/.ssh/config file.
chmod 600 ~/.ssh/config
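To confirm that password-less SSH and the ~/.ssh/config entries work together, connect to one of the nodes by its Host alias; no password prompt should appear. A minimal check:
ssh ceph-node-1 hostname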
Running the preflight playbook
Note: In the following procedure, host01 is the bootstrap node.
Navigate to the /usr/share/cephadm-ansible directory.
Open the hosts file for editing and add your nodes.
ceph-node-1
ceph-node-2
ceph-node-3
[admin]
ceph-node-admin
Run the preflight playbook, either for all hosts in the cluster or for a selected set of hosts in the cluster.
Note: When running the preflight playbook, cephadm-ansible automatically installs the chrony and ceph-common packages on the client nodes.
After installation is complete, cephadm resides in the /usr/sbin/ directory.
Run the playbook for all hosts in the cluster.
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=ibm"
For example,
ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm"
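To run the playbook for a selected set of hosts instead, you can use the standard ansible-playbook --limit option; a minimal sketch, assuming the host or group name matches an entry in your inventory file:
ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=ibm" --limit ceph-node-1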
Bootstrapping a new storage cluster
Before you begin
Before you begin, make sure that you have the following prerequisites in place:
- An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.
- Login access to cp.icr.io/cp. For information about obtaining credentials for cp.icr.io/cp, see Obtaining an entitlement key.
- A minimum of 10 GB of free space for /var/lib/containers/.
- Root-level access to all nodes.
Bootstrap the storage cluster. Replace the cluster network, monitor IP address, and registry values with those for your environment; ENTITLEMENT_KEY is your IBM entitlement key.
cephadm bootstrap --cluster-network 172.31.0.0/16 --mon-ip 172.31.34.236 --registry-url cp.icr.io/cp --registry-username cp --registry-password ENTITLEMENT_KEY --yes-i-know
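After the bootstrap completes, you can check the initial cluster state from the bootstrap node; a minimal check (only one monitor and one manager are expected at this point):
cephadm shell -- ceph -s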
Open port ranges
In addition to the common ports (22, 80, and 443), the following ports must be open for IBM Storage Ceph:
- 3300
- 6789
- 8443
- 8765
- 9093
- 9283
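A minimal sketch of opening these ports on each node, assuming firewalld is the active firewall; adjust the commands if you manage the firewall differently:
# Open the IBM Storage Ceph ports listed above, then reload the firewall rules.
firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp --add-port=8443/tcp --add-port=8765/tcp --add-port=9093/tcp --add-port=9283/tcp
firewall-cmd --reload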
Distributing SSH keys
You can use the cephadm-distribute-ssh-key.yml playbook to distribute the SSH keys instead of creating and distributing the keys manually.
About this task
The playbook distributes an SSH public key over all hosts in the inventory. You can also generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Procedure
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node.
[ansible@admin ~]$ cd /usr/share/cephadm-ansible
From the Ansible administration node, distribute the SSH keys. The optional cephadm_pubkey_path parameter is the full path of the SSH public key file on the Ansible controller host.
Note: If cephadm_pubkey_path is not specified, the playbook gets the key from the cephadm get-pub-key command. This implies that you have at least bootstrapped a minimal cluster.
ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=USER_NAME -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=ADMIN_NODE_NAME_1
For example:
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01
Adding multiple hosts
Before you begin
Before you begin, make sure that you have the following prerequisites in place:
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
About this task
Note: Be sure to create the hosts.yaml file within a host container, or create the file on the local host and then use the cephadm shell to mount the file within the container. The cephadm shell automatically places mounted files in /mnt. If you create the file directly on the local host and then apply the hosts.yaml file instead of mounting it, you might see a File does not exist error.
Procedure
- Copy the public SSH key to each of the hosts that you want to add.
- Use a text editor to create a hosts.yaml file.
- Add the host descriptions to the hosts.yaml file.
Include the labels to identify placements for the daemons that you want to deploy on each host. Separate each host description with three dashes (---).
service_type: host
addr: 10.10.128.69
hostname: host02
labels:
- mon
- osd
- mgr
---
service_type: host
addr: 10.10.128.70
hostname: host03
labels:
- mon
- osd
- mgr
---
service_type: host
addr: 10.10.128.71
hostname: host04
labels:
- mon
- osd
Mount the hosts.yaml file. If you created the hosts.yaml file directly on the local host, use the cephadm shell to mount the file.
cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml
For example,
[root@host01 ~]# cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml
If you created the hosts.yaml file within the host container, run the ceph orch apply command. For example,
[ceph: root@host01 /]# ceph orch apply -i hosts.yaml
Added host 'host02' with addr '10.10.128.69'
Added host 'host03' with addr '10.10.128.70'
Added host 'host04' with addr '10.10.128.71'
View the list of hosts and their labels.
[ceph: root@host01 /]# ceph orch host ls
HOST ADDR LABELS STATUS
host02 host02 mon,osd,mgr
host03 host03 mon,osd,mgr
host04 host04 mon,osd