Introduction

In the previous part we installed OpenStack with Packstack. In this scenario we will replace the disk space used by block storage (Cinder) and image storage (Glance) with Ceph.

Ceph (pronounced /ˈsɛf/ or /ˈkɛf/) is open-source distributed storage software built on object storage across a cluster of machines. Ceph provides object-, block-, and file-level storage interfaces. Its main goals are distributed storage with no single point of failure, scalability up to the exabyte level, and free availability. https://id.wikipedia.org/wiki/Ceph_(perangkat_lunak)

Make sure OpenStack is already running smoothly, with no issues in networking, instances, and so on.

Topology

                                                          +--------------------------+
                                                          |                          |
                                                          |                          |
                                                          |   - Compute (Nova)       |
                                +-----------------------> |                          +------------+
                                |                         |                          |            |
+---------------------------+   |                         |                          |            |
|                           |   |                         |                          |            |
| - Controller              |   |                         +--------------------------+            |
| | Compute (Nova)          | +-^                                                                 |
| | Images Storage (glance) | |                                                                   |
| - Block Storage (cinder)  | |                           +---------------------------+           |
|                           | +-------------------------> |                           | <---------+
|                           | |                           |  ceph mon atau ceph1      |           |
+---------------------------+ |                           |                           |           |
                              |                           +---------------------------+           |
                              |                           +---------------------------+           |
                              |                           |                           |           |
                              +-------------------------> |  osd0                     | <---------+
                              |                           |                           |           |
                              |                           +---------------------------+           |
                              |                           +---------------------------+           |
                              |                           |                           |           |
                              +-------------------------> |  osd1                     | <---------+
                                                          |                           |
                                                          +---------------------------+


10.10.2.205 ceph-mon ceph1
10.10.2.206 osd0
10.10.2.207 osd1
10.10.2.205 mds0
10.10.2.204 controller compute0 labopenstack.local
10.10.2.202 compute1 compute compute.localdomain
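
These entries need to be in /etc/hosts on every node (Ceph and OpenStack alike) so the hostnames resolve without DNS. One way to add them, a small sketch run as root on each machine:

# append the host mappings used throughout this guide
cat >> /etc/hosts << 'EOF'
10.10.2.205 ceph-mon ceph1
10.10.2.206 osd0
10.10.2.207 osd1
10.10.2.205 mds0
10.10.2.204 controller compute0 labopenstack.local
10.10.2.202 compute1 compute compute.localdomain
EOF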

Disks on ceph-mon:

[ceph-mon][INFO  ] Disk /dev/sda: 34.4 GB, 34359738368 bytes, 67108864 sectors
[ceph-mon][INFO  ] Disk /dev/sdb: 68.7 GB, 68719476736 bytes, 134217728 sectors
[ceph-mon][INFO  ] Disk /dev/sdc: 68.7 GB, 68719476736 bytes, 134217728 sectors
[ceph-mon][INFO  ] Disk /dev/sdd: 68.7 GB, 68719476736 bytes, 134217728 sectors

Disks on osd0:

[osd0][INFO  ] Disk /dev/sdb: 68.7 GB, 68719476736 bytes, 134217728 sectors
[osd0][INFO  ] Disk /dev/sda: 34.4 GB, 34359738368 bytes, 67108864 sectors
[osd0][INFO  ] Disk /dev/sdc: 68.7 GB, 68719476736 bytes, 134217728 sectors

Disks on osd1:

[osd1][INFO  ] Disk /dev/sda: 34.4 GB, 34359738368 bytes, 67108864 sectors
[osd1][INFO  ] Disk /dev/sdb: 68.7 GB, 68719476736 bytes, 134217728 sectors

Requirements

  • grab a cup of Lampung coffee first :D , the steps are quite long :D
  • an OpenStack deployment that is already running well.
  • for this setup we need at least 1 or 2 Ceph servers with a minimum of 8 GB of RAM each.
  • disable the firewall and NetworkManager first:

run the following commands:

    sudo systemctl disable firewalld
    sudo systemctl stop firewalld
    sudo systemctl disable NetworkManager
    sudo systemctl stop NetworkManager
    sudo systemctl enable network
    sudo systemctl start network
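
As an optional sanity check, confirm that the services ended up in the expected state:

    # firewalld and NetworkManager should be inactive/disabled, network active/enabled
    systemctl is-active firewalld NetworkManager network
    systemctl is-enabled firewalld NetworkManager network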

Installing the Ceph Cluster

  • install the Ceph repo and update the system; do this on the ceph-mon machine

    yum install epel-release
    yum install wget -y
    yum -y install vim screen crudini
    yum install epel-release yum-plugin-priorities https://download.ceph.com/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm
    yum install https://download.ceph.com/rpm-mimic/el7/noarch/ceph-deploy-2.0.1-0.noarch.rpm
    yum repolist
    yum -y update
    
  • install NTP (chrony); do this on the ceph-mon machine

    yum -y install chrony
    systemctl enable chronyd.service
    systemctl restart chronyd.service
    systemctl status chronyd.service
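
Clock skew will upset the Ceph monitors, so it is worth verifying that chrony actually has reachable time sources (an optional check):

    chronyc sources -v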
    
  • create a sudoers entry for the stack user, which will be used to manage Ceph; do this on the ceph-mon, osd0, and osd1 machines

    cat << EOF >/etc/sudoers.d/stack
    stack ALL = (root) NOPASSWD:ALL
    Defaults:stack !requiretty
    EOF
    
    useradd -d /home/stack -m stack
    passwd stack
    chmod 0440 /etc/sudoers.d/stack
    setenforce 0
    getenforce
    sed -i 's/SELINUX\=enforcing/SELINUX\=permissive/g' /etc/selinux/config
    cat /etc/selinux/config
    
  • set up passwordless SSH

Passwordless SSH must work from the ceph-mon machine to osd0 and osd1, as well as to the ceph-mon machine itself.

Do this on the ceph-mon machine as the stack user:

[stack@ceph1 ~]$ cat .ssh/config 
Host ceph1
   Hostname ceph1
   User stack
Host osd0
   Hostname osd0
   User stack
Host ceph-osd1
   Hostname osd1
   User stack
[stack@ceph1 ~]$
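
If the stack user does not have an SSH key pair yet, generate one before copying it (a minimal sketch; the empty passphrase is only for lab convenience):

# generate an RSA key pair without a passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa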

Copy the public key to each node:

ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.2.205
ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.2.206
ssh-copy-id -i ~/.ssh/id_rsa.pub 10.10.2.207
  • Deploy Ceph

Do this on the ceph-mon machine as the stack user:

mkdir os-ceph
cd os-ceph/
ceph-deploy new ceph-mon

Note: this exercise actually uses the hostname ceph1 rather than ceph-mon. Wherever ceph-mon is written, it means the ceph1 machine; both hostnames point to the same machine.

Set the replica count to 2, so that data stored in the Ceph cluster is replicated twice:

echo "osd pool default size = 2" >> ceph.conf
echo "osd pool default min size = 1" >> ceph.conf
echo "osd crush chooseleaf type = 1" >> ceph.conf
echo "osd journal size  = 100" >> ceph.conf

Install Ceph using ceph-deploy:

ceph-deploy install ceph-mon
ceph-deploy install osd0 osd1

Create the initial monitor:

ceph-deploy mon create-initial
ceph-deploy mon create

List the disks on the osd0 machine:

[stack@ceph1 osceph2]$ ceph-deploy disk list osd0
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/stack/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk list osd0
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f610f1773f8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['osd0']
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f610f3c9938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[osd0][DEBUG ] connection detected need for sudo
[osd0][DEBUG ] connected to host: osd0 
[osd0][DEBUG ] detect platform information from remote host
[osd0][DEBUG ] detect machine type
[osd0][DEBUG ] find the location of an executable
[osd0][INFO  ] Running command: sudo fdisk -l
[osd0][INFO  ] Disk /dev/sda: 34.4 GB, 34359738368 bytes, 67108864 sectors
[osd0][INFO  ] Disk /dev/mapper/centos-root: 31.1 GB, 31130124288 bytes, 60801024 sectors
[osd0][INFO  ] Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
[osd0][INFO  ] Disk /dev/sdb: 68.7 GB, 68719476736 bytes, 134217728 sectors
[osd0][INFO  ] Disk /dev/sdc: 68.7 GB, 68719476736 bytes, 134217728 sectors
[osd0][INFO  ] Disk /dev/mapper/ceph--455f828c--2454--4f1d--b6f3--887fa8b48e58-osd--block--b1f537f7--8b40--490e--b872--98b84f5f0118: 68.7 GB, 68715282432 bytes, 134209536 sectors
[osd0][INFO  ] Disk /dev/mapper/ceph--e389bab8--4765--4fd3--868e--26bfb019dc7b-osd--block--087e2243--02c1--4573--afc8--e642fff2afb4: 68.7 GB, 68715282432 bytes, 134209536 sectors

Zap (format) multiple Ceph disks via ceph-deploy:

Run ceph-deploy disk zap osd1 /dev/sdc /dev/sdd:

[stack@ceph1 ~]$ ceph-deploy disk zap osd1 /dev/sdc /dev/sdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/stack/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk zap osd1 /dev/sdc /dev/sdd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3d5e9323f8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : osd1
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f3d5eb84938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdc', '/dev/sdd']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on osd1
[osd1][DEBUG ] connection detected need for sudo
[osd1][DEBUG ] connected to host: osd1 
[osd1][DEBUG ] detect platform information from remote host
[osd1][DEBUG ] detect machine type
[osd1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[osd1][DEBUG ] zeroing last few blocks of device
[osd1][DEBUG ] find the location of an executable
[osd1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdc
[osd1][DEBUG ] --> Zapping: /dev/sdc
[osd1][DEBUG ] --> --destroy was not specified, but zapping a whole device will remove the partition table
[osd1][DEBUG ] Running command: /usr/sbin/wipefs --all /dev/sdc
[osd1][DEBUG ] Running command: /bin/dd if=/dev/zero of=/dev/sdc bs=1M count=10
[osd1][DEBUG ]  stderr: 10+0 records in
[osd1][DEBUG ] 10+0 records out
[osd1][DEBUG ] 10485760 bytes (10 MB) copied
[osd1][DEBUG ]  stderr: , 0.010268 s, 1.0 GB/s
[osd1][DEBUG ] --> Zapping successful for: <Raw Device: /dev/sdc>
[ceph_deploy.osd][DEBUG ] zapping /dev/sdd on osd1
[osd1][DEBUG ] connection detected need for sudo
[osd1][DEBUG ] connected to host: osd1
[osd1][DEBUG ] detect platform information from remote host
[osd1][DEBUG ] detect machine type
[osd1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[osd1][DEBUG ] zeroing last few blocks of device
[osd1][DEBUG ] find the location of an executable
[osd1][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm zap /dev/sdd
[osd1][DEBUG ] --> Zapping: /dev/sdd
[osd1][DEBUG ] --> --destroy was not specified, but zapping a whole device will remove the partition table
[osd1][DEBUG ] Running command: /usr/sbin/wipefs --all /dev/sdd
[osd1][DEBUG ] Running command: /bin/dd if=/dev/zero of=/dev/sdd bs=1M count=10
[osd1][DEBUG ]  stderr: 10+0 records in
[osd1][DEBUG ] 10+0 records out
[osd1][DEBUG ] 10485760 bytes (10 MB) copied
[osd1][DEBUG ]  stderr: , 0.00494895 s, 2.1 GB/s
[osd1][DEBUG ] --> Zapping successful for: <Raw Device: /dev/sdd>

Do the same on the ceph-mon and osd0 hosts for the disks they have; see the sketch below.
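
Based on the disk listings earlier, that would look roughly like the following; adjust the device names to the disks actually present on each host:

# device names below are taken from the disk listings above and may differ on your hosts
ceph-deploy disk zap ceph-mon /dev/sdb /dev/sdc /dev/sdd
ceph-deploy disk zap osd0 /dev/sdb /dev/sdc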

Create the OSDs with ceph-deploy:

for a in /dev/sdc /dev/sdd; do ceph-deploy osd create --data $a osd1; done

Do the same on the ceph-mon and osd0 hosts, again adjusting for the disks each one has; a sketch follows.
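
A possible version for the other two hosts, again assuming the disk layout shown in the listings above:

# adjust the device lists to the disks each host really has
for a in /dev/sdb /dev/sdc /dev/sdd; do ceph-deploy osd create --data $a ceph-mon; done
for a in /dev/sdb /dev/sdc; do ceph-deploy osd create --data $a osd0; done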

Copy the configuration and admin key to all nodes:

ceph-deploy admin ceph1 osd0 osd1
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
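
The chmod has to be applied on every node where the ceph CLI will be used. One way to do that from ceph-mon, a sketch relying on the passwordless SSH configured earlier:

# make the admin keyring readable on each node
for h in ceph1 osd0 osd1; do ssh $h sudo chmod +r /etc/ceph/ceph.client.admin.keyring; done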

Check the Ceph status:

[stack@ceph1 osceph2]$ ceph -s
  cluster:
    id:     10925d88-4e51-4311-88aa-52c81ab14eb6
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 1 daemons, quorum ceph1
    mgr: no daemons active
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs   
    objects: 0  objects, 0 B  
    usage:   0 B used, 0 B / 0 B avail
    pgs:

There is no active mgr yet; to activate one:

[stack@ceph1 osceph2]$ ceph-deploy mgr create ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/stack/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create ceph1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
.....
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][WARNIN] mgr keyring does not exist yet, creating one
[ceph1][DEBUG ] create a keyring file
[ceph1][DEBUG ] create path recursively if it doesn't exist
[ceph1][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph1/keyring
[ceph1][INFO  ] Running command: sudo systemctl enable ceph-mgr@ceph1
[ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph1][INFO  ] Running command: sudo systemctl start ceph-mgr@ceph1
[ceph1][INFO  ] Running command: sudo systemctl enable ceph.target

Then check it with:

sudo systemctl status ceph-mgr@ceph1

Test creating the pools:

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128
rbd pool init volumes
rbd pool init images
rbd pool init backups
rbd pool init vms
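
To confirm the pools were created and see the available capacity, a quick optional check:

ceph osd lspools
ceph df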

Check the status again:

[stack@ceph1 osceph2]$ ceph -s
  cluster:
    id:     10925d88-4e51-4311-88aa-52c81ab14eb6
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph1
    mgr: ceph1(active)
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   6.0 GiB used, 378 GiB / 384 GiB avail
    pgs:

Integrating the Ceph Cluster with OpenStack Rocky

Install requirements

Log in as root on the ceph1 machine:

ssh controller sudo yum -y install python-rbd ceph-common
ssh compute1 sudo yum -y install python-rbd ceph-common
cat /etc/ceph/ceph.conf | ssh controller sudo tee /etc/ceph/ceph.conf
cat /etc/ceph/ceph.conf | ssh compute1 sudo tee /etc/ceph/ceph.conf

Switch back to the stack user.

Set up client authentication for the cinder service:

[stack@ceph1 osceph2]$ ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
        key = AQAmtCJdV85NHBAAhPX88nr0pVPZP6+34IPQhw==
[stack@ceph1 osceph2]$

Set up client authentication for the glance service:

[stack@ceph1 osceph2]$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQDDtCJd4aiPLhAALdSwrLdxizoZWZKs2nnBGg==
[stack@ceph1 osceph2]$

Add the client.cinder and client.glance keys to the respective nodes and change their ownership:

glance@controller:

[stack@ceph1 osceph2]$ ceph auth get-or-create client.glance | ssh root@controller sudo tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
        key = AQDDtCJd4aiPLhAALdSwrLdxizoZWZKs2nnBGg==
[stack@ceph1 osceph2]$ 
[stack@ceph1 osceph2]$ ssh root@controller sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
[stack@ceph1 osceph2]$

cinder@controller:

[stack@ceph1 osceph2]$ ceph auth get-or-create client.cinder | ssh root@controller sudo tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
        key = AQAmtCJdV85NHBAAhPX88nr0pVPZP6+34IPQhw==
[stack@ceph1 osceph2]$ ssh root@controller sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
[stack@ceph1 osceph2]$

glance@compute:

[stack@ceph1 osceph2]$ ceph auth get-or-create client.glance | ssh root@compute1 sudo tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
        key = AQDDtCJd4aiPLhAALdSwrLdxizoZWZKs2nnBGg==
[stack@ceph1 osceph2]$

cinder@compute:

[stack@ceph1 osceph2]$ ceph auth get-or-create client.cinder | ssh root@compute1 sudo tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
        key = AQAmtCJdV85NHBAAhPX88nr0pVPZP6+34IPQhw==
[stack@ceph1 osceph2]$

Copy the cinder key:

[stack@ceph1 osceph2]$ ceph auth get-key client.cinder | ssh root@controller tee client.cinder.key
AQAmtCJdV85NHBAAhPX88nr0pVPZP6+34IPQhw==[stack@ceph1 osceph2]$ 
[stack@ceph1 osceph2]$

[stack@ceph1 osceph2]$ ceph auth get-key client.cinder | ssh root@compute1 tee client.cinder.key
AQAmtCJdV85NHBAAhPX88nr0pVPZP6+34IPQhw==[stack@ceph1 osceph2]$ 
[stack@ceph1 osceph2]$

Generate a UUID for the secret and save it for configuring nova-compute later:

[root@compute ~]# uuidgen > uuid-secret.txt
[root@compute ~]# cat uuid-secret.txt 
eba93c91-5641-4890-aad7-42606c3c3e66

Then, on the compute nodes, add the secret key to libvirt:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>`cat uuid-secret.txt`</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

Run it:

[root@compute ~]# cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
>   <uuid>`cat uuid-secret.txt`</uuid>
>   <usage type='ceph'>
>     <name>client.cinder secret</name>
>   </usage>
> </secret>
> EOF



[root@compute ~]# virsh secret-define --file secret.xml
Secret eba93c91-5641-4890-aad7-42606c3c3e66 created
[root@compute ~]# virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key)
Secret value set
[root@compute ~]#
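
To confirm the secret is registered with libvirt, an optional check:

virsh secret-list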

If a compute node also runs on the controller, these steps need to be done there as well.

Setup Cinder

Cinder is used to store the disks of the VMs running in OpenStack.

Go to the controller machine, edit /etc/cinder/cinder.conf, add the following to the [DEFAULT] section, and enable Ceph as the Cinder backend:

enabled_backends = ceph
glance_api_version = 2

Add a ceph section:

[ceph]
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
glance_api_version = 2
rbd_secret_uuid = eba93c91-5641-4890-aad7-42606c3c3e66
volume_backend_name = ceph
rbd_cluster_name = ceph

Set rbd_secret_uuid to the UUID produced by uuidgen earlier.
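
If you would rather not edit the file by hand, the same keys can be set with openstack-config, the crudini wrapper that is also used later in this guide (a sketch, assuming the UUID above):

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
openstack-config --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf ceph rbd_pool volumes
openstack-config --set /etc/cinder/cinder.conf ceph rbd_user cinder
openstack-config --set /etc/cinder/cinder.conf ceph rbd_secret_uuid eba93c91-5641-4890-aad7-42606c3c3e66
openstack-config --set /etc/cinder/cinder.conf ceph volume_backend_name ceph
openstack-config --set /etc/cinder/cinder.conf ceph rbd_cluster_name ceph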

The final cinder.conf:

[DEFAULT]
backup_swift_url=http://10.10.2.204:8080/v1/AUTH_
backup_swift_container=volumebackups
backup_driver=cinder.backup.drivers.swift
enable_v3_api=True
storage_availability_zone=nova
default_availability_zone=nova
#default_volume_type=iscsi

default_volume_type=ceph
auth_strategy=keystone
enabled_backends=ceph
glance_api_version = 2
osapi_volume_listen=0.0.0.0
osapi_volume_workers=4
debug=False
log_dir=/var/log/cinder
transport_url=rabbit://guest:guest@10.10.2.204:5672/
control_exchange=openstack
api_paste_config=/etc/cinder/api-paste.ini
glance_host=10.10.2.204

[backend]

[backend_defaults]

[barbican]

[brcd_fabric_example]

[cisco_fabric_example]

[coordination]

[cors]

[database]

connection=mysql+pymysql://cinder:1c36661a5efa4c2b@10.10.2.204/cinder

[fc-zone-manager]

[healthcheck]

[key_manager]

backend=cinder.keymgr.conf_key_mgr.ConfKeyManager

[keystone_authtoken]

www_authenticate_uri=http://10.10.2.204:5000/
auth_uri=http://10.10.2.204:5000/
auth_type=password
auth_url=http://10.10.2.204:35357
username=cinder
password=e3f3c81d1b41459b
user_domain_name=Default
project_name=services
project_domain_name=Default

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path=/var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

driver=messagingv2

[oslo_messaging_rabbit]

ssl=False

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

policy_file=/etc/cinder/policy.json

[oslo_reports]

[oslo_versionedobjects]

[profiler]

[sample_remote_file_source]

[service_user]

[ssl]

[vault]
#[lvm]
#volume_backend_name=lvm
#volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
#iscsi_ip_address=10.10.2.204
#iscsi_helper=lioadm
#volume_group=cinder-volumes
#volumes_dir=/var/lib/cinder/volumes

[ceph]
##volume_driver = cinder.volume.drivers.rbd.RBDDriver
##rbd_pool = volumes
##rbd_user = cinder
##rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
glance_api_version = 2
rbd_secret_uuid = eba93c91-5641-4890-aad7-42606c3c3e66
volume_backend_name = ceph
rbd_cluster_name = ceph

#volume_driver = cinder.volume.drivers.rbd.RBDDriver
#rbd_pool = volumes
#rbd_ceph_conf = /etc/ceph/ceph.conf
#rbd_flatten_volume_from_snapshot = false
#rbd_max_clone_depth = 5
#rbd_store_chunk_size = 4
#rados_connect_timeout = -1
#rbd_user = cinder
#rbd_secret_uuid = ca405f73-e2c9-40aa-ab67-34cf47f7caf9

Restart the Cinder Services

Restart them with:

[root@controller ~(keystone_admin)]# service openstack-cinder-api restart
Redirecting to /bin/systemctl restart openstack-cinder-api.service
[root@controller ~(keystone_admin)]# service openstack-cinder-scheduler restart
Redirecting to /bin/systemctl restart openstack-cinder-scheduler.service
[root@controller ~(keystone_admin)]# service openstack-cinder-volume restart
Redirecting to /bin/systemctl restart openstack-cinder-volume.service
[root@controller ~(keystone_admin)]#

Then test with the command rbd --id cinder ls volumes:

[root@controller ~(keystone_admin)]#  rbd --id cinder ls volumes
2019-07-18 10:49:20.216 7f8210d64b00 -1 asok(0x55b893fd64f0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.cinder.28025.94251245209344.asok': (2) No such file or directory
volume-23d36d36-5969-49e2-a542-9f6350359b38
volume-8ec1920f-b3fe-4ce8-9a45-456645ac480c
volume-8f7c693b-a32f-4c7e-9cf8-cc6b490687f8
volume-9ffa5128-31d6-4ae9-a668-949ac49b5255
volume-a711ee9e-418a-4b39-a661-d7a5990270d5
volume-b3e0396b-b346-42c7-9ff3-1058e839f870
volume-dd948a95-6d6a-4624-a000-1949e4a915a4
volume-f9a9bb00-6f5a-4f57-ae16-3a6368d9ec5c
[root@controller ~(keystone_admin)]#

Test creating a volume:

[root@controller ~(keystone_admin)]# cinder create --volume-type ceph --display-name testCephVolTut 5
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-07-18T03:51:11.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 4897b2b4-c9eb-4c40-b3e8-f0fb75fed592 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | testCephVolTut                       |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | b0a4a8851d114a3f9bf4265c9e4c5a9c     |
| replication_status             | None                                 |
| size                           | 5                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | dfba230d76a8486287912da65d815769     |
| volume_type                    | ceph                                 |
+--------------------------------+--------------------------------------+
[root@controller ~(keystone_admin)]#

List the volumes:

[root@controller ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name           | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| 23d36d36-5969-49e2-a542-9f6350359b38 | available | testCephVold   | 5    | ceph        | false    |                                      |
| 4897b2b4-c9eb-4c40-b3e8-f0fb75fed592 | available | testCephVolTut | 5    | ceph        | false    |                                      |
| 88a95ab2-7f9b-4367-bee0-211926ab2bcc | error     | testaja        | 2    | ceph        | false    |                                      |
| 8ec1920f-b3fe-4ce8-9a45-456645ac480c | in-use    |                | 40   | ceph        | true     | 45644d8c-4cf3-4faf-95c7-02a8bc9095c2 |
| 8f7c693b-a32f-4c7e-9cf8-cc6b490687f8 | available | test           | 1    | ceph        | false    |                                      |
| 9ffa5128-31d6-4ae9-a668-949ac49b5255 | available | testCephVasdol | 5    | ceph        | false    |                                      |
| a711ee9e-418a-4b39-a661-d7a5990270d5 | available | testCephVol    | 5    | ceph        | false    |                                      |
| b3e0396b-b346-42c7-9ff3-1058e839f870 | available | cephvol        | 2    | ceph        | false    |                                      |
| dd948a95-6d6a-4624-a000-1949e4a915a4 | in-use    |                | 10   | ceph        | true     | bac4ed05-4c5d-458f-b6ec-6276d54b52e5 |
| f9a9bb00-6f5a-4f57-ae16-3a6368d9ec5c | in-use    | freebsd        | 40   | ceph        | true     | 0736390f-3f9e-4ebb-a08e-05d9b0611703 |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
[root@controller ~(keystone_admin)]# openstack volume list
+--------------------------------------+----------------+-----------+------+---------------------------------+
| ID                                   | Name           | Status    | Size | Attached to                     |
+--------------------------------------+----------------+-----------+------+---------------------------------+
| 4897b2b4-c9eb-4c40-b3e8-f0fb75fed592 | testCephVolTut | available |    5 |                                 |
| 8ec1920f-b3fe-4ce8-9a45-456645ac480c |                | in-use    |   40 | Attached to centos on /dev/vda  |
| dd948a95-6d6a-4624-a000-1949e4a915a4 |                | in-use    |   10 | Attached to cirros on /dev/vda  |
| f9a9bb00-6f5a-4f57-ae16-3a6368d9ec5c | freebsd        | in-use    |   40 | Attached to fbsd on /dev/vda    |
| 23d36d36-5969-49e2-a542-9f6350359b38 | testCephVold   | available |    5 |                                 |
| 9ffa5128-31d6-4ae9-a668-949ac49b5255 | testCephVasdol | available |    5 |                                 |
| 88a95ab2-7f9b-4367-bee0-211926ab2bcc | testaja        | error     |    2 |                                 |
| a711ee9e-418a-4b39-a661-d7a5990270d5 | testCephVol    | available |    5 |                                 |
| b3e0396b-b346-42c7-9ff3-1058e839f870 | cephvol        | available |    2 |                                 |
| 8f7c693b-a32f-4c7e-9cf8-cc6b490687f8 | test           | available |    1 |                                 |
+--------------------------------------+----------------+-----------+------+---------------------------------+
[root@controller ~(keystone_admin)]#

Check the status of the Cinder services:

[root@controller ~(keystone_admin)]# cinder service-list
+------------------+-----------------+------+----------+-------+----------------------------+-----------------+
| Binary           | Host            | Zone | Status   | State | Updated_at                 | Disabled Reason |
+------------------+-----------------+------+----------+-------+----------------------------+-----------------+
| cinder-backup    | controller      | nova | enabled  | down  | 2019-07-12T07:02:01.000000 | -               |
| cinder-scheduler | controller      | nova | enabled  | up    | 2019-07-18T03:54:33.000000 | -               |
| cinder-volume    | controller@ceph | nova | enabled  | up    | 2019-07-18T03:54:36.000000 | -               |
| cinder-volume    | controller@lvm  | nova | disabled | down  | 2019-07-08T07:11:04.000000 | migratekeCeph   |
+------------------+-----------------+------+----------+-------+----------------------------+-----------------+

To disable a service from the service list:

openstack volume service set --disable --disable-reason migratekeCeph controller@lvm cinder-volume

List the Cinder volume types:

[root@controller ~(keystone_admin)]# cinder type-list

+--------------------------------------+-------+-------------+-----------+
| ID                                   | Name  | Description | Is_Public |
+--------------------------------------+-------+-------------+-----------+
| 16e4f895-3f64-4a3d-b9be-6accd740a3fc | ceph  | -           | True      |
| 513f6ef6-58af-4db6-97c3-76911c812d55 | iscsi | -           | True      |
+--------------------------------------+-------+-------------+-----------+
[root@controller ~(keystone_admin)]# cinder extra-specs-list
+--------------------------------------+-------+---------------------------------+
| ID                                   | Name  | extra_specs                     |
+--------------------------------------+-------+---------------------------------+
| 16e4f895-3f64-4a3d-b9be-6accd740a3fc | ceph  | {'volume_backend_name': 'ceph'} |
| 513f6ef6-58af-4db6-97c3-76911c812d55 | iscsi | {'volume_backend_name': 'lvm'}  |
+--------------------------------------+-------+---------------------------------+

Set ceph as the default backend:

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
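
The final configuration shown above also has default_volume_type=ceph; if yours does not, the same tool can set it:

openstack-config --set /etc/cinder/cinder.conf DEFAULT default_volume_type ceph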

Setup Glance

Glance is used to store ISOs and images that will be used by VMs in OpenStack.

Edit /etc/glance/glance-api.conf and add the following to the [glance_store] section:

stores=rbd,file,http,swift
default_store=rbd
##file
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
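
As with Cinder, these keys can also be applied with openstack-config instead of editing the file by hand (a sketch):

openstack-config --set /etc/glance/glance-api.conf glance_store stores rbd,file,http,swift
openstack-config --set /etc/glance/glance-api.conf glance_store default_store rbd
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_chunk_size 8
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf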

The final glance-api.conf:

[DEFAULT]
bind_host=0.0.0.0
bind_port=9292
workers=4
image_cache_dir=/var/lib/glance/image-cache
registry_host=0.0.0.0
debug=False
log_file=/var/log/glance/api.log
log_dir=/var/log/glance
transport_url=rabbit://guest:guest@10.10.2.204:5672/
enable_v1_api=False

[cors]
[database]
connection=mysql+pymysql://glance:4d819a8b65594569@10.10.2.204/glance

[glance_store]

stores=rbd,file,http,swift

default_store=rbd
##file

rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

filesystem_store_datadir=/var/lib/glance/images/

os_region_name=RegionOne

[image_format]

[keystone_authtoken]

www_authenticate_uri=http://10.10.2.204:5000/v3
auth_uri=http://10.10.2.204:5000/v3
auth_type=password
auth_url=http://10.10.2.204:35357
username=glance
password=21de7a56246541aa
user_domain_name=Default
project_name=services
project_domain_name=Default

[matchmaker_redis]

[oslo_concurrency]

[oslo_messaging_amqp]

[oslo_messaging_kafka]
[oslo_messaging_notifications]

driver=messagingv2

topics=notifications

[oslo_messaging_rabbit]

ssl=False

default_notification_exchange=glance

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

policy_file=/etc/glance/policy.json

[paste_deploy]

flavor=keystone

[profiler]

[store_type_location_strategy]

[task]

[taskflow_executor]

Restart the Glance services:

service openstack-glance-api restart
service openstack-glance-registry restart

Check the images via rbd:

[root@controller ~(keystone_admin)]# rbd --id glance ls images
2019-07-18 13:29:24.350 7f4ebe2f1b00 -1 asok(0x55d34ebfb4f0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.glance.24598.94366047655680.asok': (2) No such file or directory
5759f626-32c2-45c7-b2e4-6972f606ae68
9991376d-b436-437a-b76a-e9e68bb6e8e4
ca03be4c-d5ce-4a1a-b0a0-d769edd6b703
[root@controller ~(keystone_admin)]#

Check the image list:

[root@controller ~(keystone_admin)]# glance image-list
+--------------------------------------+---------------------+
| ID                                   | Name                |
+--------------------------------------+---------------------+
| 5759f626-32c2-45c7-b2e4-6972f606ae68 | CentOS-7-x86_64     |
| 9991376d-b436-437a-b76a-e9e68bb6e8e4 | Cirros              |
| ca03be4c-d5ce-4a1a-b0a0-d769edd6b703 | FreeBSD-11.2-stable |
+--------------------------------------+---------------------+

If the list is still empty, you can populate it yourself:

wget -c https://download.freebsd.org/ftp/snapshots/VM-IMAGES/11.2-STABLE/amd64/Latest/FreeBSD-11.2-STABLE-amd64.qcow2.xz
unxz FreeBSD-11.2-STABLE-amd64.qcow2.xz
openstack image create --container-format bare --disk-format qcow2 --file FreeBSD-11.2-STABLE-amd64.qcow2 --public FreeBSD-11.2-stable

Then check again:

[root@controller ~(keystone_admin)]# openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 5759f626-32c2-45c7-b2e4-6972f606ae68 | CentOS-7-x86_64     | active |
| 9991376d-b436-437a-b76a-e9e68bb6e8e4 | Cirros              | active |
| ca03be4c-d5ce-4a1a-b0a0-d769edd6b703 | FreeBSD-11.2-stable | active |
+--------------------------------------+---------------------+--------+
[root@controller ~(keystone_admin)]#

Try a test with a CentOS image:

wget -c "http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz"
unxz CentOS-7-x86_64-GenericCloud.qcow2.xz

Create another image:

[root@controller ~(keystone_admin)]#  openstack image create --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2 --public CentOS-7-Test

Then check:

[root@controller ~(keystone_admin)]# openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 0d9d18de-df05-46b0-96fc-10c9e1e8cc8a | CentOS-7-Test       | active |
| 5759f626-32c2-45c7-b2e4-6972f606ae68 | CentOS-7-x86_64     | active |
| 9991376d-b436-437a-b76a-e9e68bb6e8e4 | Cirros              | active |
| ca03be4c-d5ce-4a1a-b0a0-d769edd6b703 | FreeBSD-11.2-stable | active |
+--------------------------------------+---------------------+--------+
[root@controller ~(keystone_admin)]#

ok done.

Nova compute

On every nova-compute node, edit /etc/ceph/ceph.conf and add:

[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log

The final ceph.conf:

[global]
fsid = ac9b148c-e413-48ae-8adc-93a5cca6e88a
mon_initial_members = ceph1
mon_host = 10.10.2.205
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd pool default size = 2
osd pool default min size = 1
osd crush chooseleaf type = 1
osd journal size  = 100

[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log
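
The AdminSocket warnings seen earlier in the rbd output happen because the admin socket directory configured above does not exist yet. Creating it and giving the QEMU/libvirt processes access avoids them; a sketch based on the upstream Ceph/OpenStack guide (the qemu user and libvirt group are assumptions, adjust to whatever your QEMU and libvirt actually run as):

# qemu:libvirt is an assumed owner; match it to your distribution
mkdir -p /var/run/ceph/guests/ /var/log/ceph/
chown qemu:libvirt /var/run/ceph/guests/ /var/log/ceph/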

Restart the Nova services:

systemctl restart openstack-nova-compute

Check the hypervisor list:

[root@controller ~(keystone_admin)]# nova hypervisor-list

+--------------------------------------+---------------------+-------+---------+
| ID                                   | Hypervisor hostname | State | Status  |
+--------------------------------------+---------------------+-------+---------+
| 62391ec9-6b71-4b00-be98-1d89bee17129 | controller          | up    | enabled |
| a3a253c0-10fe-464a-a77c-f85de9ff3720 | compute1            | up    | enabled |
+--------------------------------------+---------------------+-------+---------+

Via the openstack CLI:

[root@controller ~(keystone_admin)]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  3 | nova-conductor   | controller | internal | enabled | up    | 2019-07-18T06:55:09.000000 |
|  5 | nova-scheduler   | controller | internal | enabled | up    | 2019-07-18T06:55:08.000000 |
|  7 | nova-consoleauth | controller | internal | enabled | up    | 2019-07-18T06:55:15.000000 |
|  9 | nova-compute     | controller | nova     | enabled | up    | 2019-07-18T06:55:09.000000 |
| 13 | nova-compute     | compute1   | nova     | enabled | up    | 2019-07-18T06:55:17.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

Via the nova CLI:

[root@controller ~(keystone_admin)]# nova service-list
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id                                   | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason | Forced down |
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
| b77d04ea-302d-4470-a171-9ccf04c6535b | nova-conductor   | controller | internal | enabled | up    | 2019-07-18T06:57:29.000000 | -               | False       |
| 45a49f72-bce0-4800-b97e-2ec4f6b8ee54 | nova-scheduler   | controller | internal | enabled | up    | 2019-07-18T06:57:38.000000 | -               | False       |
| bbe646df-3028-4e5a-a8bd-577cc6860b44 | nova-consoleauth | controller | internal | enabled | up    | 2019-07-18T06:57:35.000000 | -               | False       |
| 77660dbd-a5ad-47c5-b93f-38e1bcf5045b | nova-compute     | controller | nova     | enabled | up    | 2019-07-18T06:57:29.000000 | -               | False       |
| 98c1c43d-c78c-4445-b36f-91c8ffe4fde5 | nova-compute     | compute1   | nova     | enabled | up    | 2019-07-18T06:57:37.000000 | -               | False       |
+--------------------------------------+------------------+------------+----------+---------+-------+----------------------------+-----------------+-------------+
[root@controller ~(keystone_admin)]#

To delete a service via nova:

nova service-delete 0d770b47-d95d-4ba5-a366-5e5baa29aebd

To disable and re-enable a service:

nova service-disable 0d770b47-d95d-4ba5-a366-5e5baa29aebd
nova service-enable 0d770b47-d95d-4ba5-a366-5e5baa29aebd

To delete a service via the openstack CLI:

openstack compute service delete 12

Note: if the compute node records shown on the OpenStack hosts page in Horizon differ from what the CLI reports, the solution is:

`delete the affected entries from the service list until only the ones that are still running well remain`

Then run host discovery again:

nova-manage cell_v2 discover_hosts

List the services:

openstack hypervisor list;nova hypervisor-list;nova service-list;openstack compute service list;


done.

The Result: OpenStack Connected to the Ceph Cluster

(screenshots: vol-ceph-stein)

References:

  1. https://gist.github.com/zhanghui9700/9874686
  2. https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/ceph_block_device_to_openstack_guide/configuring_openstack_to_use_ceph
  3. http://docs.ceph.com/docs/master/rbd/rbd-openstack/
  4. https://ask.openstack.org/en/question/119889/openstack-compute-node-not-recognized-as-hypervisor/