How do you deploy a Ceph cluster with cephadm?

馬哥Linux運維 · Source: 51CTO · 2024-01-16 09:32

I. Introduction to cephadm

Starting with Red Hat Ceph Storage 5, cephadm replaces the earlier ceph-ansible as the tool that manages the entire cluster lifecycle: deployment, management, and monitoring.

The cephadm bootstrap process creates a small storage cluster on a single node (the bootstrap node), consisting of one Ceph Monitor and one Ceph Manager plus any required dependencies.

As illustrated in the figure below:

(Figure: overview of the cephadm bootstrap process.)

cephadm logs in to a container registry, pulls the Ceph image, and uses that image to deploy services on each Ceph node. The Ceph container image is required for deployment, because every deployed Ceph daemon runs as a container based on it.
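
The bootstrap in section 5.1 below pulls the default image from quay.io. If the nodes must pull from a private or mirrored registry instead, cephadm accepts an explicit image; a hedged sketch, where registry.example.com is a placeholder and not part of this walkthrough:

# Pre-pull a specific Ceph image, then bootstrap with that same image
cephadm --image registry.example.com/ceph/ceph:v16 pull
cephadm --image registry.example.com/ceph/ceph:v16 bootstrap --mon-ip 172.24.1.6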

To communicate with the cluster nodes, cephadm uses SSH. Over these SSH connections it can add hosts to the cluster, add storage, and monitor those hosts.
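
After bootstrap, the SSH key and client configuration that cephadm uses for these connections can be inspected, which helps when a host refuses to be added (a quick check, not part of the original steps):

# Public key that cephadm distributes to managed hosts
ceph cephadm get-pub-key
# SSH settings cephadm uses to reach the hosts
ceph cephadm get-ssh-config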

The only packages a node needs to bring the cluster up are cephadm, podman or docker, python3, and chrony. This containerized approach reduces the complexity and the dependencies of a Ceph deployment.

1. python3

yum -y install python3

2. podman or docker to run the containers

# Install docker-ce from the Aliyun (Alibaba Cloud) mirror
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker --now
# Configure a registry mirror (accelerator)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bp1bh1ga.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
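
To confirm that Docker is running and the registry mirror was picked up, a quick check such as the following can be used (not part of the original steps):

docker --version
docker info | grep -A1 "Registry Mirrors"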

3. Time synchronization (for example chrony or NTP)
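
On Rocky Linux 8, chrony is available from the base repositories; a minimal setup could look like this (assuming the default pool servers are reachable from the nodes):

yum -y install chrony
systemctl enable chronyd --now
# Verify that the clock is synchronized
chronyc tracking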

II. Preparation before deploying the Ceph cluster

2.1 Node preparation

Node   OS                       IP address   Ceph roles                    Disks
node1  Rocky Linux release 8.6  172.24.1.6   mon, mgr, server, admin node  /dev/vdb, /dev/vdc, /dev/vdd
node2  Rocky Linux release 8.6  172.24.1.7   mon, mgr                      /dev/vdb, /dev/vdc, /dev/vdd
node3  Rocky Linux release 8.6  172.24.1.8   mon, mgr                      /dev/vdb, /dev/vdc, /dev/vdd
node4  Rocky Linux release 8.6  172.24.1.9   client, admin node            -

2.2 Edit /etc/hosts on every node

172.24.1.6 node1
172.24.1.7 node2
172.24.1.8 node3
172.24.1.9 node4
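
One way to append these entries on each node is a simple here-document (assuming the file does not already contain them):

cat >> /etc/hosts <<'EOF'
172.24.1.6 node1
172.24.1.7 node2
172.24.1.8 node3
172.24.1.9 node4
EOF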

2.3 Set up passwordless SSH from node1

[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id root@node2
[root@node1 ~]# ssh-copy-id root@node3
[root@node1 ~]# ssh-copy-id root@node4
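
A quick loop verifies that passwordless login now works from node1:

[root@node1 ~]# for n in node2 node3 node4; do ssh root@$n hostname; done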

III. Install cephadm on node1

1. Install the EPEL repository
[root@node1 ~]# yum -y install epel-release
2. Install the Ceph repository
[root@node1 ~]# yum search release-ceph
Last metadata expiration check performed on Tue Feb 14 2023 14:22:00.
================= Name Matched: release-ceph ============================================
centos-release-ceph-nautilus.noarch : Ceph Nautilus packages from the CentOS Storage SIG repository
centos-release-ceph-octopus.noarch : Ceph Octopus packages from the CentOS Storage SIG repository
centos-release-ceph-pacific.noarch : Ceph Pacific packages from the CentOS Storage SIG repository
centos-release-ceph-quincy.noarch : Ceph Quincy packages from the CentOS Storage SIG repository
[root@node1 ~]# yum -y install centos-release-ceph-pacific.noarch
3. Install cephadm
[root@node1 ~]# yum -y install cephadm
4. Install ceph-common
[root@node1 ~]# yum -y install ceph-common
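
Both packages can be sanity-checked before moving on (the exact version strings depend on the Pacific build that was installed):

[root@node1 ~]# cephadm version
[root@node1 ~]# ceph --version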

IV. Install docker-ce and python3 on the other nodes

See section I for the detailed steps.
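
If the passwordless SSH from section 2.3 is already in place, the section I steps can be pushed to the remaining nodes from node1 with a loop like the sketch below (it simply repeats the earlier commands; adjust the mirror URLs if needed):

for n in node2 node3 node4; do
  ssh root@$n 'yum -y install python3 yum-utils device-mapper-persistent-data lvm2 &&
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo &&
    sed -i "s+download.docker.com+mirrors.aliyun.com/docker-ce+" /etc/yum.repos.d/docker-ce.repo &&
    yum -y install docker-ce &&
    systemctl enable docker --now'
done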

V. Deploy the Ceph cluster

5.1 Bootstrap the Ceph cluster and set up the dashboard (web management UI) at the same time

[root@node1 ~]# cephadm bootstrap --mon-ip 172.24.1.6 --allow-fqdn-hostname --initial-dashboard-user admin --initial-dashboard-password redhat --dashboard-password-noupdate
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 0b565668-ace4-11ed-960c-5254000de7a0
Verifying IP 172.24.1.6 port 3300 ...
Verifying IP 172.24.1.6 port 6789 ...
Mon IP `172.24.1.6` is in CIDR network `172.24.1.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 172.24.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host node1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:


             URL: https://node1.domain1.example.com:8443/
            User: admin
        Password: redhat


Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:


        sudo /usr/sbin/cephadm shell --fsid 0b565668-ace4-11ed-960c-5254000de7a0 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring


Please consider enabling telemetry to help improve Ceph:


        ceph telemetry on


For more information see:


        https://docs.ceph.com/docs/pacific/mgr/telemetry/


Bootstrap complete.
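
Because ceph-common is already installed on node1 (section III), the new cluster can be inspected right away; the daemon list shows the mon, mgr and monitoring containers that bootstrap just deployed (names and ages will differ per cluster):

[root@node1 ~]# ceph -s
[root@node1 ~]# ceph orch ps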

5.2 Copy the cluster's public SSH key to the nodes that will become cluster members

[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node4

5.3 Add node2, node3 and node4 (docker-ce and python3 must be installed on each node first)

[root@node1 ~]# ceph orch host add node2 172.24.1.7
Added host 'node2' with addr '172.24.1.7'
[root@node1 ~]# ceph orch host add node3 172.24.1.8
Added host 'node3' with addr '172.24.1.8'
[root@node1 ~]# ceph orch host add node4 172.24.1.9
Added host 'node4' with addr '172.24.1.9'

5.4 Apply the _admin label to node1 and node4, and copy the Ceph configuration file and keyring to node4

[root@node1 ~]# ceph orch host label add node1 _admin
Added label _admin to host node1
[root@node1 ~]# ceph orch host label add node4 _admin
Added label _admin to host node4
[root@node1 ~]# scp /etc/ceph/{*.conf,*.keyring} root@node4:/etc/ceph
[root@node1 ~]# ceph orch host ls
HOST   ADDR        LABELS  STATUS  
node1  172.24.1.6  _admin          
node2  172.24.1.7                  
node3  172.24.1.8                  
node4  172.24.1.9  _admin

5.5 Deploy the MONs

[root@node1 ~]# ceph orch apply mon "node1,node2,node3"
Scheduled mon update...

5.6 Deploy the MGRs

[root@node1 ~]# ceph orch apply mgr --placement="node1,node2,node3"
Scheduled mgr update...
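
The placement requested in 5.5 and 5.6 can be verified through the orchestrator (daemon names and ages will differ):

[root@node1 ~]# ceph orch ls mon
[root@node1 ~]# ceph orch ls mgr
[root@node1 ~]# ceph orch ps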

5.7 Add the OSDs

[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdd
Or, equivalently:
[root@node1 ~]# for i in node1 node2 node3; do for j in vdb vdc vdd; do ceph orch daemon add osd $i:/dev/$j; done; done
Created osd(s) 0 on host 'node1'
Created osd(s) 1 on host 'node1'
Created osd(s) 2 on host 'node1'
Created osd(s) 3 on host 'node2'
Created osd(s) 4 on host 'node2'
Created osd(s) 5 on host 'node2'
Created osd(s) 6 on host 'node3'
Created osd(s) 7 on host 'node3'
Created osd(s) 8 on host 'node3'
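
If every free disk on every host should become an OSD, the per-device commands can also be replaced by a single orchestrator rule; note that it claims all eligible devices, so review ceph orch device ls first:

[root@node1 ~]# ceph orch apply osd --all-available-devices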


[root@node1 ~]# ceph orch device ls
HOST   PATH      TYPE  DEVICE ID   SIZE  AVAILABLE  REFRESHED  REJECT REASONS                                                 
node1  /dev/vdb  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked  
node1  /dev/vdc  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked  
node1  /dev/vdd  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked  
node2  /dev/vdb  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked  
node2  /dev/vdc  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked  
node2  /dev/vdd  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked  
node3  /dev/vdb  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked  
node3  /dev/vdc  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked  
node3  /dev/vdd  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked

5.8 At this point the Ceph cluster deployment is complete

[root@node1 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean
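
The OSD layout and raw capacity can be double-checked with (output not reproduced here):

[root@node1 ~]# ceph osd tree
[root@node1 ~]# ceph df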

5.9 Managing Ceph from node4

# The Ceph configuration file and keyring were already copied to node4 in section 5.4
[root@node4 ~]# ceph -s
-bash: ceph: command not found        # ceph-common still needs to be installed
# Install the Ceph repository
[root@node4 ~]# yum -y install centos-release-ceph-pacific.noarch
# Install ceph-common
[root@node4 ~]# yum -y install ceph-common
[root@node4 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean








Original title: 使用cephadm部署ceph集群 (Deploying a Ceph cluster with cephadm)

Source: WeChat official account 馬哥Linux運維 (WeChat ID: magedu-Linux). Please credit the source when reposting.
