1. Installation environment
          |-----node1 (mon, osd)   sda is the system disk; sdb and sdc are OSD disks
          |
          |-----node2 (mon, osd)   sda is the system disk; sdb and sdc are OSD disks
admin-----|
          |-----node3 (mon, osd)   sda is the system disk; sdb and sdc are OSD disks
          |
          |-----client
Ceph Monitors communicate with each other on port 6789 by default; OSDs communicate with each other on ports in the range 6800-7300 by default.
2. Preparation (all nodes)
2.1. Set the IP address
vim /etc/sysconfig/network-scripts/ifcfg-em1
IPADDR=192.168.130.205
NETMASK=255.255.255.0
GATEWAY=192.168.130.2
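After editing the file, restart the networking service so the new address takes effect (a quick sketch, assuming the classic network service is in use on CentOS 7):
systemctl restart network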
2.2. Disable the firewall
systemctl stop firewalld.service       # stop firewalld
systemctl disable firewalld.service    # prevent firewalld from starting at boot
firewall-cmd --state                   # check the firewall status
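Alternatively, instead of disabling firewalld, the Ceph ports mentioned in section 1 can be opened on every node (a sketch; adjust the zone to your environment):
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload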
2.3. Change the yum repository mirror
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bk
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
yum makecache
2.4. Set the timezone and enable NTP time synchronization
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpstat
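A quick sanity check that both the timezone and time sync took effect:
timedatectl    # the time zone should show Asia/Shanghai
ntpq -p        # ntpd should list reachable upstream servers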
2.5. Edit the hosts file
vim /etc/hosts
192.168.130.205 admin
192.168.130.204 client
192.168.130.203 node3
192.168.130.202 node2
192.168.130.201 node1
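ceph-deploy refers to nodes by their short hostnames, so each machine's own hostname should match its entry above; for example on node1 (repeat on every node with its own name):
hostnamectl set-hostname node1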
2.6. Install the EPEL repository, add the Ceph yum repository, and update the package repositories
Install the EPEL repository
rpm -vih http://mirrors.sohu.com/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
Add the Ceph yum repository
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
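Note that the baseurl above only carries the architecture-specific (x86_64) packages; ceph-deploy itself is a noarch package, so the admin node may also need a noarch section along these lines (a sketch pointing at the same mirror):
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc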
2.7. Install ceph-deploy and ceph (install ceph on every node; ceph-deploy is only needed on the admin node)
yum -y update && yum -y install ceph    # all nodes (the hammer repo added in 2.6 supplies the packages)
yum -y install ceph-deploy              # admin node only
3. Set up password-less SSH login (admin node)
3.1. Generate an SSH key pair; when prompted "Enter passphrase", just press Enter to leave the passphrase empty:
ssh-keygen
3.2. Copy the public key to all nodes
ssh-copy-id root@node1
ssh-copy-id root@node2
ssh-copy-id root@node3
ssh-copy-id root@client
3.3. Verify that password-less SSH login works
ssh node1
ssh node2
ssh node3
ssh client
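Optionally, a ~/.ssh/config on the admin node saves specifying the user each time ceph-deploy connects; a minimal sketch, assuming root is used throughout as in the ssh-copy-id step above:
Host node1
    Hostname node1
    User root
Host node2
    Hostname node2
    User root
Host node3
    Hostname node3
    User root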
4. Create the monitors (admin node)
4.1. Create monitors on node1, node2, and node3
mkdir myceph
cd myceph
ceph-deploy new node1 node2 node3
4.2. Change the default OSD replica count by appending "osd pool default size = 2" to the end of the configuration file
vim ceph.conf    # the ceph.conf generated by ceph-deploy new in the myceph working directory
osd pool default size = 2
4.3. Configure the initial monitor(s) and gather all keys
ceph-deploy mon create-initial
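If this succeeds, the myceph working directory should now contain the gathered keyrings, roughly (the exact set depends on the ceph-deploy version):
ceph.client.admin.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.mon.keyring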
5. Create the OSDs (admin node)
5.1. List the disks
ceph-deploy disk list node1
ceph-deploy disk list node2
5.2. Zap the disks
ceph-deploy disk zap node1:sdb
ceph-deploy disk zap node1:sdc
ceph-deploy disk zap node2:sdb
ceph-deploy disk zap node2:sdc
ceph-deploy disk zap node3:sdb
ceph-deploy disk zap node3:sdc
5.3. Prepare and activate the OSDs
ceph-deploy osd prepare node1:sdb
ceph-deploy osd prepare node1:sdc
ceph-deploy osd prepare node2:sdb
ceph-deploy osd prepare node2:sdc
ceph-deploy osd prepare node3:sdb
ceph-deploy osd prepare node3:sdc
ceph-deploy osd activate node1:sdb1
ceph-deploy osd activate node1:sdc1
ceph-deploy osd activate node2:sdb1
ceph-deploy osd activate node2:sdc1
ceph-deploy osd activate node3:sdb1
ceph-deploy osd activate node3:sdc1
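From the myceph working directory on the admin node, a quick check that all six OSDs registered and came up (using the conf and keyring that ceph-deploy left there, since the admin keyring is only distributed in step 5.5):
ceph -c ceph.conf -k ceph.client.admin.keyring osd tree    # expect osd.0 through osd.5, all marked up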
5.4. Remove an OSD
ceph osd out osd.3
ssh node1 service ceph stop osd.3
ceph osd crush remove osd.3
ceph auth del osd.3    # remove the OSD from authentication
ceph osd rm 3          # remove the OSD from the cluster
5.5. Copy the configuration file and admin keyring to every node, so that Ceph commands can be run without specifying the monitor address and ceph.client.admin.keyring each time
ceph-deploy admin admin node1 node2 node3
5.6. Check the cluster health
ceph health
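ceph health should report HEALTH_OK once the placement groups finish peering; for a fuller picture:
ceph -s    # monitor quorum, OSD map (6 osds: 6 up, 6 in) and placement group status
ceph -w    # the same summary, followed by a live stream of cluster events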
6. Configure a block device (client node)
6.1. Create an image
rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
rbd create foo --size 4096 -m node1 -k /etc/ceph/ceph.client.admin.keyring
6.2. Map the image to a block device
sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
sudo rbd map foo --name client.admin -m node1 -k /etc/ceph/ceph.client.admin.keyring
6.3. Create a file system
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
6.4. Mount the file system
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device
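A simple write is enough to confirm the RBD-backed file system works (test.txt is just a hypothetical file name):
echo "hello ceph" > test.txt
df -h /mnt/ceph-block-device    # the mount should show the ~4 GB image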
Pool operations
1. List pools
ceph osd lspools
2. Create a pool
ceph osd pool create pool-name pg-num pgp-num
ceph osd pool create test 512 512
3. Delete a pool
ceph osd pool delete test test --yes-i-really-really-mean-it
4. Rename a pool
ceph osd pool rename current-pool-name new-pool-name
ceph osd pool rename test test2
5. Show pool statistics
rados df
6. Set a pool option value
ceph osd pool set test size 3    # set the number of object replicas
7. Get a pool option value
ceph osd pool get test size      # get the number of object replicas
Block device image operations
1. Create a block device image
rbd create --size {megabytes} {pool-name}/{image-name}
rbd create --size 1024 test/foo
2. List block device images
rbd ls
3. Retrieve image information
rbd info {image-name}
rbd info foo
rbd info {pool-name}/{image-name}
rbd info test/foo
4. Resize a block device image
rbd resize --size 512 test/foo --allow-shrink    # shrink
rbd resize --size 4096 test/foo                  # grow
5. Remove a block device image
rbd rm test/foo
Kernel module operations
1. Map a block device
sudo rbd map {pool-name}/{image-name} --id {user-name}
sudo rbd map test/foo2 --id admin
If cephx authentication is enabled, the keyring must also be specified:
sudo rbd map test/foo2 --id admin --keyring /etc/ceph/ceph.client.admin.keyring
2. Show mapped devices
rbd showmapped
3. Unmap a block device
sudo rbd unmap /dev/rbd/{poolname}/{imagename}
rbd unmap /dev/rbd/test/foo2
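If the mapped device is still mounted somewhere (as in section 6.4), unmount it before unmapping, otherwise the unmap will fail with a device-busy error:
sudo umount /mnt/ceph-block-device    # only if the device was mounted there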