Friday, September 9, 2016

Minimum Installation Steps for Ceph Jewel and Playing with It

Our goal is to reduce the installation procedure so that people can start playing with Ceph Jewel immediately.
Hence, we use a minimal installation with a single VM and one disk drive to achieve this goal.
I do not recommend using the OS directory as the OSD drive, since that is not a usual situation.
So attach one extra disk to this VM first.

Let's start by discussing the following topics.

  • Environment
  • Installation
  • Result
  • Playing with Ceph Jewel

Environment

OS

  • Ubuntu 14.04
  • Kernel 3.16.0-30-generic
  • VMware VM

/etc/hosts Setting

root@cephserver:~/mycephfiles# cat /etc/hosts
127.0.0.1   localhost

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.2.141 cephserver
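
Before running ceph-deploy, it is worth confirming that the hostname resolves to the VM's real IP rather than 127.0.0.1, since ceph-deploy picks up that address for the monitor. A quick check against the entries above:

getent hosts cephserver    # should print 192.168.2.141, not 127.0.0.1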

Data Drive Setting

root@cephserver:~/mycephfiles# ls /dev/sd*
sda   sda1  sda2  sda5  sdb

The sdb device will be the only OSD in our Ceph test bed.
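
If you are not sure which device is the newly attached disk, lsblk shows its size and confirms it has no partitions yet (the device name may differ on your VM):

lsblk /dev/sdb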

Ceph Jewel Installation

We installed Ceph Jewel using the following commands.

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update
sudo apt-get install ceph-deploy
mkdir mycephfiles
cd mycephfiles
ceph-deploy new cephserver
echo 'osd crush chooseleaf type = 0' >> ceph.conf
echo 'osd pool default size = 1' >> ceph.conf
ceph-deploy install cephserver
ceph-deploy mon create-initial
mkfs.ext4 /dev/sdb
ceph-deploy osd create cephserver:sdb
ceph-deploy osd activate cephserver:/dev/sdb1
ceph-deploy admin cephserver
chmod +r /etc/ceph/ceph.client.admin.keyring
ceph -s
ceph osd tree
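
The two echo lines are what make a single-node cluster work: "osd crush chooseleaf type = 0" lets CRUSH place replicas across OSDs of the same host instead of requiring separate hosts, and "osd pool default size = 1" keeps only one copy of each object, so one OSD can reach active+clean. After ceph-deploy new, the ceph.conf in mycephfiles should look roughly like the sketch below (the fsid is generated, so yours will differ):

[global]
fsid = af0ac66e-5020-4218-926a-66d57895fafd
mon_initial_members = cephserver
mon_host = 192.168.2.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd crush chooseleaf type = 0
osd pool default size = 1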

Result

After installation, we check whether Ceph is healthy or not. First of all, check the overall status of the cluster.

root@cephserver:~# ceph -s
    cluster af0ac66e-5020-4218-926a-66d57895fafd
     health HEALTH_WARN
            too many PGs per OSD (448 > max 300)
     monmap e1: 1 mons at {cephserver=192.168.2.141:6789/0}
            election epoch 5, quorum 0 cephserver
     osdmap e24: 1 osds: 1 up, 1 in
            flags sortbitwise
      pgmap v230: 448 pgs, 4 pools, 3291 MB data, 947 objects
            3333 MB used, 1990 GB / 1994 GB avail
                 448 active+clean
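
The HEALTH_WARN only means that 448 placement groups exceed the default warning threshold of 300 per OSD; with a single OSD in a test bed this is harmless. If you want to silence it, one option (a sketch, not required for the rest of this post) is to raise the threshold in ceph.conf and restart the monitor:

[mon]
mon pg warn max per osd = 500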

Check the status of all the OSDs; they must be in the "up" state.

root@cephserver:~# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.94730 root default
-2 1.94730     host cephserver
 0 1.94730         osd.0            up  1.00000          1.00000

Earlier we let ceph-deploy activate /dev/sdb directly, and now we can see where it is mounted. Note that even though we ran mkfs.ext4 on /dev/sdb, ceph-deploy osd create repartitioned the disk and formatted the data partition with XFS (its default), which is what the mount output shows.

root@cephserver:~# mount 
.
.
/dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64)

Playing with Ceph Jewel

A Big Difference from Other Ceph Versions

Ceph Jewel's default RBD image features need a kernel newer than 4.4 to map properly. So if your kernel is older than that, you will get errors like the following when you map an image.

root@cephserver:~# rbd map pool101/realimage1

rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

The solution is to disable those features so that mapping works properly, using the commands below.

rbd feature disable pool101/realimage1 deep-flatten fast-diff object-map exclusive-lock
rbd map pool101/realimage1
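
If you do not want to disable features on every new image, another option is to create images with only the layering feature, which old kernels support. A sketch of the ceph.conf setting (1 is the feature bit for layering); images created afterwards can then be mapped without the extra step:

[client]
rbd default features = 1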

Playing with a Pool: Create, Map and Mount

Now we can add some data to Ceph.

root@cephserver:~# ceph osd pool create pool101 128
root@cephserver:~# rbd create -p pool101 realimage1 --size 102400 --image-format 2
root@cephserver:~# rbd feature disable pool101/realimage1 deep-flatten fast-diff object-map exclusive-lock
root@cephserver:~# rbd map pool101/realimage1
/dev/rbd0
root@cephserver:~# mkfs.ext4 /dev/rbd0
root@cephserver:~# mount /dev/rbd0 /mnt/cephtest/
root@cephserver:~# touch /mnt/cephtest/aa && echo "v1"> /mnt/cephtest/aa
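
The mount point has to exist before the mount command (mkdir -p /mnt/cephtest if it does not). Afterwards you can confirm the mapping and the mounted filesystem with a quick check like this:

rbd showmapped
df -h /mnt/cephtest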

Playing with Snapshots

You can take a snapshot of the data; the snapshot is read-only.

root@cephserver:~# rbd snap create pool101/realimage1@snap1
root@cephserver:~# rbd snap protect pool101/realimage1@snap1
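
To confirm the snapshot exists, list the snapshots of the image:

rbd snap ls pool101/realimage1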

Playing with Clones and Mount

If you want to write to a snapshot, you have to clone it first. Then you can read and write data through the clone.

root@cephserver:~# rbd clone pool101/realimage1@snap1 realimage1snap1clone1
root@cephserver:~# rbd feature disable realimage1snap1clone1 deep-flatten fast-diff object-map exclusive-lock
root@cephserver:~# rbd map realimage1snap1clone1
/dev/rbd1
root@cephserver:~# mount /dev/rbd1 /mnt/cephclone
root@cephserver:~# cat /mnt/cephclone/aa
v1
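
As with /mnt/cephtest, the /mnt/cephclone mount point must exist before mounting. You can also list the clones that depend on a protected snapshot, and, if you later want a clone to stand on its own so the snapshot can be unprotected and deleted, flatten it. A sketch:

rbd children pool101/realimage1@snap1
rbd flatten realimage1snap1clone1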
