OpenStack Deployment


1. Preliminary Stage: Environment Preparation

This lab follows the official technical documentation for the OpenStack (Train release) cloud platform and walks the reader step by step through building the platform.
Special thanks to two bloggers for their technical corrections: CSDN@尼古拉斯程序员 and CSDN@m0_60155284.

1.1 Topology Plan

| Item | Controller node | Compute node | Storage node | Notes |
| --- | --- | --- | --- | --- |
| Hostname | controller | compute | storage | - |
| CPU | 2 cores | 2 cores | - | - |
| Disk | 100 GB | 100 GB | 20 GB | manual disk partitioning recommended |
| Memory | 8 GB | 4 GB | - | - |
| NIC 1 | ens33, NAT, 192.168.182.136/24 | ens33, NAT, 192.168.182.137/24 | - | NAT mode, for external network traffic |
| NIC 2 | ens34, host-only, 192.168.223.130/24 | ens34, host-only, 192.168.223.131/24 | - | host-only mode, for internal network traffic |

An OpenStack cloud platform normally needs at least three servers for its nodes. Because the author's resources are limited, this lab reuses the compute node as the storage node, building a two-node OpenStack platform; readers with more resources can extend the layout to a full multi-node deployment.

1.2 Install the Virtual Machine and Configure Dual NICs (controller node)

[Figure: configuring the two NICs (NAT and host-only) for the virtual machine]

1.3 Install CentOS 7

Partition the disk manually:

> /boot 1 GB

> / 90 GB

> swap 8 GB

2. Host Configuration

2.1 Change the Hostname (controller node)

[root@localhost ~]# hostnamectl set-hostname controller 

2.2 Configure Local Name Resolution (controller node)

[root@controller]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4			# local IPv4 loopback address
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6			# local IPv6 loopback address
# Add the internal network address and hostname
192.168.223.130 controller
# Test connectivity to the host
[root@controller ~]# ping controller
PING controller (192.168.223.130) 56(84) bytes of data.
64 bytes from controller (192.168.223.130): icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from controller (192.168.223.130): icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from controller (192.168.223.130): icmp_seq=3 ttl=64 time=0.082 ms

# The output above shows the hostname now resolves to the intended IP address

2.3 Firewall Management (controller node)

# Check the firewall status
[root@controller ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon   ## currently stopped
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)  ## "disabled" means it will not start at boot
   Active: inactive (dead)  ## inactive (dead) means the service is not running; a running service shows "running"
     Docs: man:firewalld(1)

Jul 12 17:24:45 controller systemd[1]: Starting firewalld - dynamic firewa....
Jul 12 17:24:55 controller systemd[1]: Started firewalld - dynamic firewal....
Jul 12 17:24:56 controller firewalld[866]: WARNING: AllowZoneDrifting is e....
Jul 12 19:54:13 controller systemd[1]: Stopping firewalld - dynamic firewa....
Jul 12 19:54:14 controller systemd[1]: Stopped firewalld - dynamic firewal....
Hint: Some lines were ellipsized, use -l to show in full.
# Stop the firewall for the current session
[root@controller ~]# systemctl stop firewalld
# Disable it at boot
[root@controller ~]# systemctl disable firewalld
# Disable SELinux
[root@controller ~]# vi /etc/selinux/config	# disable SELinux at boot
SELINUX=disabled

[root@controller ~]# setenforce 0				# switch to permissive mode immediately
[root@controller ~]# sestatus					# check the status
[root@controller ~]# reboot					# reboot (if setenforce 0 does not take effect, disable autostart and then reboot)
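
The same edit can be made non-interactively; a minimal sketch using sed, assuming the stock CentOS 7 /etc/selinux/config:

[root@controller ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@controller ~]# grep '^SELINUX=' /etc/selinux/config		# should print SELINUX=disabled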

2.4 Install Basic Supporting Services (controller node)

1. Chrony time synchronization service
[root@controller ~]# vi /etc/chrony.conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
# Four public NTP servers are preconfigured; with Internet access the host synchronizes with one of them
# Allow Chrony clients on the given subnet to use this host as an NTP server
allow 192.168.223.0/24
# Restart the service and enable it at boot
[root@controller ~]# systemctl restart chronyd  
[root@controller ~]# systemctl enable chronyd
# Test: show the NTP sources this client is currently using
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* time.cloudflare.com           3  10   273   992  -3600us[-1688us] +/-  101ms
^+ tick.ntp.infomaniak.ch        1  10   227   545  -4913us[-4913us] +/-  119ms
^+ makaki.miuku.net              3  10   237   155    +39ms[  +39ms] +/-  126ms
^+ electrode.felixc.at           3  10   363   346    -37ms[  -37ms] +/-  170ms
2. OpenStack cloud platform framework
# Install the OpenStack Train release repository
[root@controller ~]# yum install centos-release-openstack-train
# Upgrade the installed packages
[root@controller ~]# yum upgrade -y
# Install the OpenStack command-line client
[root@controller ~]# yum -y install python-openstackclient 
# Test: check the version
[root@controller ~]# openstack --version
openstack 4.0.2
# Install the OpenStack SELinux management package
[root@controller ~]# yum install openstack-selinux
# If SELinux was not disabled manually beforehand, installing this package disables it automatically; the system's SELinux policy is then managed by "openstack-selinux"

2.5 Clone the Virtual Machine

After cloning and before the first boot, regenerate the MAC addresses so each NIC gets a new identifier; this prevents address conflicts with the original machine. A per-NIC IP sketch follows the figure below.

[Figure: cloning the virtual machine in VMware]
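
After the clone boots, each of its NICs also needs its own IP address; a minimal sketch, assuming the default CentOS 7 ifcfg file layout (adjust interface names and addresses to your own plan):

[root@compute ~]# sed -i '/^UUID=/d' /etc/sysconfig/network-scripts/ifcfg-ens33	# drop the cloned UUID so a new one is generated
[root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34					# change IPADDR to 192.168.223.131
[root@compute ~]# systemctl restart network										# apply the new addressing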

2.6 Install Basic Supporting Services (controller node, continued)

3. MariaDB database service
# Install the MariaDB database
[root@controller ~]# yum install -y mariadb-server python2-PyMySQL
		————————————————————————————————————
			mariadb-server: the database server
			python2-PyMySQL: the Python module OpenStack uses to connect to the database
# Create a database configuration file
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf
[mysqld]										# the settings below apply to the database server
bind-address=192.168.223.130					# only accept connections addressed to this IP
default-storage-engine=innodb					# default storage engine (InnoDB is the common transactional engine)
innodb_file_per_table=on						# give each InnoDB table its own tablespace file
max_connections=4096							# maximum number of connections
collation-server=utf8_general_ci				# server collation (each character set has one or more collations)
character-set-server=utf8						# server character set
# Start the database and enable it at boot
[root@controller ~]# systemctl enable mariadb 
[root@controller ~]# systemctl start mariadb
# Initialize the database
[root@controller ~]# mysql_secure_installation
Enter current password for root (enter for none) :        # press [Enter] if there is no current password
Set root password?[Y/n]Y								  # set a new password?
New password:000000										  # enter the new password
Re-enter new password:000000							  # confirm the new password
Remove anonymous users?[Y/n]Y							  # remove anonymous users?
Disallow root login remotely?[Y/n]Y					  	  # forbid remote root logins?
Remove test database and access to it?[Y/n]Y  			  # remove the test database?
Reload privilege tables now?[Y/n]Y						  # reload the privilege tables now?
# Test: log in to the database
[root@controller ~]# mysql -u root -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.3.20-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> 

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.510 sec)
4. RabbitMQ message queue service
# Install the message queue service
[root@controller ~]# yum install rabbitmq-server
# Start the service and enable it at boot
[root@controller ~]# systemctl enable rabbitmq-server
[root@controller ~]# systemctl start rabbitmq-server 
# Test: check that the service is running (RabbitMQ serves on ports 5672 and 25672)
[root@controller ~]# netstat -tnlup | grep -E "(:5672|25672)"
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      1335/beam.smp       
tcp6       0      0 :::5672                 :::*                    LISTEN      1335/beam.smp 
# Ports in LISTEN state mean the service is up

Depending on the Linux release, you may see `-bash: netstat: command not found`. netstat belongs to the legacy net-tools package, which many newer distributions no longer install by default; its modern replacement is the ss command from iproute2, which reads network state directly from the kernel.
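If netstat is unavailable, the same check works with ss, which accepts the flags used above:

[root@controller ~]# ss -tnlup | grep -E "(:5672|25672)"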

# RabbitMQ service management examples:
# Example 1: create a user named "openstack" with password "RABBIT_PASSWD"
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASSWD
Creating user "openstack"

# Example 2: change the password of the "openstack" user to "000000"
[root@controller ~]# rabbitmqctl change_password openstack 000000
Changing password for user "openstack"

# Example 3: grant the "openstack" user configure, write, and read permissions
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

# Example 4: list the permissions of the "openstack" user
[root@controller ~]# rabbitmqctl list_user_permissions openstack
Listing permissions for user "openstack"
/       .*      .*     .*		# every RabbitMQ server defines a default virtual host "/"; the openstack user has configure, write, and read permissions on all of its resources

# Example 5: delete the "openstack" user
[root@controller ~]# rabbitmqctl delete_user openstack
Deleting user "openstack"
[Note] Do not actually delete this user here; the example only shows how queue users are managed. If you did delete it, recreate it by hand as shown below, or later services will fail!
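Recreating the user only repeats the commands from the examples above:

[root@controller ~]# rabbitmqctl add_user openstack 000000
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"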
5. Memcached in-memory caching service
# Install the caching service
[root@controller ~]# yum -y install memcached python-memcached
		————————————————————————————————————
			memcached: the caching service itself
			python-memcached: the Python interface used to manage it
# Installation automatically creates a user named "memcached"
[root@controller ~]# cat /etc/passwd | grep memcached
memcached:x:985:979:Memcached daemon:/run/memcached:/sbin/nologin
# Configure the caching service
[root@controller ~]# vi /etc/sysconfig/memcached
PORT="11211"						# service port
USER="memcached"					# user (the automatically created memcached user)
MAXCONN="1024"						# maximum number of connections
CACHESIZE="64"						# maximum cache size (MB)
OPTIONS="-l 127.0.0.1,::1"			# extra options; by default only local addresses are listened on (add further listen addresses here)

# Add the internal network address to the listen list
OPTIONS="-l 127.0.0.1,::1,192.168.223.130"
# Start the service and enable it at boot
[root@controller ~]# systemctl enable memcached
[root@controller ~]# systemctl start memcached
# Test: check that the service is running
[root@controller ~]# netstat -tnlup | grep memcached
tcp        0      0 192.168.223.130:11211   0.0.0.0:*               LISTEN      10259/memcached     
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      10259/memcached     
tcp6       0      0 ::1:11211               :::*                    LISTEN      10259/memcached 
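
As an extra check, Memcached's text protocol can be queried directly; a quick sketch, assuming nc (netcat) is installed:

[root@controller ~]# echo stats | nc 192.168.223.130 11211 | head -3	# prints a few STAT lines if the service responds (Ctrl+C to quit if it lingers)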
6. etcd distributed key-value store
# Install the etcd distributed key-value store
[root@controller ~]# yum install etcd
# Configure the server
[root@controller ~]# vi /etc/etcd/etcd.conf 
ETCD_LISTEN_PEER_URLS="http://192.168.223.130:2380"		# address for listening to other etcd members (an IP address, not a domain name)
ETCD_LISTEN_CLIENT_URLS="http://192.168.223.130:2379,http://127.0.0.1:2379"	 # addresses serving client requests
ETCD_NAME="controller"    # name of this etcd member (names must be unique; the hostname is a good choice)
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.223.130:2380"  # this member's peer address, advertised to the rest of the cluster
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.223.130:2379"	# this member's client address, advertised to the rest of the cluster
ETCD_INITIAL_CLUSTER="controller=http://192.168.223.130:2380"	# initial cluster configuration, as "member-name=member-peer-URL"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"	# token identifying this etcd cluster, so multiple clusters can tell each other apart
ETCD_INITIAL_CLUSTER_STATE="new"   # initial cluster state ("new" for a new cluster, "existing" to join one that already exists)
# Start the service and enable it at boot
[root@controller ~]# systemctl enable etcd
[root@controller ~]# systemctl start etcd
# Test: check that the service is running
[root@controller ~]# netstat -tnlup | grep etcd
tcp        0      0 192.168.223.130:2379    0.0.0.0:*               LISTEN      11248/etcd          
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      11248/etcd          
tcp        0      0 192.168.223.130:2380    0.0.0.0:*               LISTEN      11248/etcd          
# etcd management examples
# Example 1: store a key-value pair with key "testkey" and value "001"
[root@controller ~]# etcdctl set testkey 001
 001
# Example 2: read back the value of testkey
[root@controller ~]# etcdctl get testkey 
 001
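
A matching cleanup command exists in the same etcdctl v2 client; for completeness:

[root@controller ~]# etcdctl rm testkey		# remove the test key; "etcdctl get testkey" will then report it is gone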

2.7 Fetch Installation Packages from the Network (controller node)

# Install the tools for building a local repository
[root@controller ~]# yum -y install yum-utils createrepo yum-plugin-priorities
		————————————————————————————————————
			yum-utils: a YUM toolkit; its reposync utility mirrors remote YUM repositories to the local disk
			createrepo: turns the mirrored data into a software repository
			yum-plugin-priorities: a plugin that manages package priorities between YUM repositories
# Create the YUM repo file
[root@controller ~]# cd /etc/yum.repos.d/			# enter the YUM repo configuration directory
[root@controller yum.repos.d]# mkdir bak			# create a backup directory
[root@controller yum.repos.d]# mv *.repo bak		# move all existing .repo files into the backup directory
[root@controller yum.repos.d]# vi OpenStack.repo	# create a new configuration file, OpenStack.repo
# Write the repository definitions into OpenStack.repo
[base]
name=base
baseurl=http://repo.huaweicloud.com/centos/7/os/x86_64/
enable=1
gpgcheck=0
[extras]
name=extras
baseurl=http://repo.huaweicloud.com/centos/7/extras/x86_64/
enable=1
gpgcheck=0
[updates]
name=updates
baseurl=http://repo.huaweicloud.com/centos/7/updates/x86_64/
enable=1
gpgcheck=0
[train]
name=train
baseurl=http://repo.huaweicloud.com/centos/7/cloud/x86_64/openstack-train/
enable=1
gpgcheck=0
[virt]
name=virt
baseurl=http://repo.huaweicloud.com/centos/7/virt/x86_64/kvm-common/
enable=1
gpgcheck=0
# These use Huawei Cloud's CentOS mirrors and define 5 repositories: "base", "extras", "updates", "train", "virt"
# Check that the repositories are usable
[root@controller ~]# yum clean all		# clear the cache
[root@controller ~]# yum makecache		# rebuild the cache
[root@controller ~]# yum repolist		# list the enabled repositories
Loaded plugins: fastestmirror, langpacks, priorities
Loading mirror speeds from cached hostfile
repo id                     repo name                     status
base                        base                          10,072
extras                      extras                           518
train                       train                          3,168
updates                     updates                        5,061
virt                        virt                              63
repolist: 18,882
# Mirror the remote packages to the local disk
[root@controller ~]# mkdir /opt/openstack   # create an empty directory under /opt
[root@controller ~]# cd /opt/openstack      # change into the new directory
[root@controller openstack]# reposync		# download every enabled repository (close to 30 GB; this takes a while)
# Build the local repositories
[root@controller openstack]# createrepo -v base
[root@controller openstack]# createrepo -v updates
[root@controller openstack]# createrepo -v extras
[root@controller openstack]# createrepo -v train
[root@controller openstack]# createrepo -v virt

2.8 Change the Hostname (compute node)

[root@localhost ~]# hostnamectl set-hostname compute

2.9 Configure Local Name Resolution (compute and controller nodes)

[root@controller]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4			# local IPv4 loopback address
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6			# local IPv6 loopback address
192.168.223.130 controller																# add the internal addresses and hostnames
192.168.223.131 compute

[root@compute]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4			# local IPv4 loopback address
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6			# local IPv6 loopback address
192.168.223.130 controller																# add the internal addresses and hostnames
192.168.223.131 compute
# Test: check connectivity to both hosts
[root@compute ~]# ping compute
PING compute (192.168.223.131) 56(84) bytes of data.
64 bytes from compute (192.168.223.131): icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from compute (192.168.223.131): icmp_seq=2 ttl=64 time=0.077 ms
64 bytes from compute (192.168.223.131): icmp_seq=3 ttl=64 time=0.043 ms

[root@compute ~]# ping controller
PING controller (192.168.223.130) 56(84) bytes of data.
64 bytes from controller (192.168.223.130): icmp_seq=1 ttl=64 time=1.72 ms
64 bytes from controller (192.168.223.130): icmp_seq=2 ttl=64 time=1.82 ms
64 bytes from controller (192.168.223.130): icmp_seq=3 ttl=64 time=0.373 ms

# The output above shows the hostnames now resolve to the intended IP addresses

2.10 System Firewall Management (compute node)

# 1. Check the SELinux status (should be disabled)
[root@compute ~]# sestatus					
SELinux status:                 disabled
# 2. Check the system firewall status (should be stopped)
[root@compute ~]# systemctl status firewalld 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

2.11 Build a Local Software Repository

2.11.1 Configure the YUM Source (controller node)
  1. Edit the repo file so it points at the local directories

    [root@controller ~]# cd /etc/yum.repos.d/
    [root@controller yum.repos.d]# vi OpenStack.repo 
    [base]
    name=base
    baseurl=file:///opt/openstack/base/
    enable=1
    gpgcheck=0
    [extras]
    name=extras
    baseurl=file:///opt/openstack/extras/
    enable=1
    gpgcheck=0
    [updates]
    name=updates
    baseurl=file:///opt/openstack/updates/
    enable=1
    gpgcheck=0
    [train]
    name=train
    baseurl=file:///opt/openstack/train/
    enable=1
    gpgcheck=0
    [virt]
    name=virt
    baseurl=file:///opt/openstack/virt/
    enable=1
    gpgcheck=0
    
  2. Clear and rebuild the YUM cache

    [root@controller yum.repos.d]# yum clean all
    [root@controller yum.repos.d]# yum makecache
    
  3. Test: check that the repositories are usable

    [root@controller yum.repos.d]# yum repolist
    Loaded plugins: fastestmirror, langpacks, priorities
    Loading mirror speeds from cached hostfile
    repo id                             repo name                            status
    base                                base                                 10,072
    extras                              extras                                  518
    train                               train                                 3,168
    updates                             updates                               5,061
    virt                                virt                                     63
    repolist: 18,882
    # If these 5 repositories are listed, the configuration is correct
    
2.11.2 Configure the FTP Server (controller node)
  1. Install the FTP package

    [root@controller ~]# yum install -y vsftpd
    
  2. Point the FTP root at the software repository

    [root@controller ~]# vim /etc/vsftpd/vsftpd.conf 
    anon_root=/opt	  # point the anonymous users' root directory at the repository
    
  3. Start the FTP service

    [root@controller ~]# systemctl start vsftpd
    [root@controller ~]# systemctl enable vsftpd
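
  4. Test: confirm the repository is reachable over FTP; a quick check, assuming curl is installed and vsftpd's default anonymous access is enabled

    [root@controller ~]# curl ftp://controller/openstack/		# should list base, extras, updates, train, virt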
    
2.11.3 Configure the YUM Source (compute node)
# Back up the existing repo files
[root@compute ~]# cd /etc/yum.repos.d/
[root@compute yum.repos.d]# mkdir bak
[root@compute yum.repos.d]# mv *.repo bak
[root@compute yum.repos.d]# ls
bak
# Copy OpenStack.repo from the controller node to the compute node
[root@compute yum.repos.d]# scp root@controller:/etc/yum.repos.d/OpenStack.repo /etc/yum.repos.d/ 
The authenticity of host 'controller (192.168.223.130)' can't be established.
ECDSA key fingerprint is SHA256:eISfhBqeL5CkKnn5As40KmbML214dO/UMnkr3kOPLA4.
ECDSA key fingerprint is MD5:25:40:9e:0f:fe:cc:8a:bc:bc:25:03:d4:6e:8c:4b:f1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'controller,192.168.223.130' (ECDSA) to the list of known hosts.
root@controller's password: 
OpenStack.repo                                                                           100%  383   168.8KB/s   00:00    

[root@compute yum.repos.d]# ls
bak  OpenStack.repo
# Edit the repo file to use the repositories served by the controller's FTP server
[root@compute yum.repos.d]# vim OpenStack.repo 
[base]
name=base
baseurl=ftp://controller/openstack/base/
enable=1
gpgcheck=0
[extras]
name=extras
baseurl=ftp://controller/openstack/extras/
enable=1
gpgcheck=0
[updates]
name=updates
baseurl=ftp://controller/openstack/updates/
enable=1
gpgcheck=0
[train]
name=train
baseurl=ftp://controller/openstack/train/
enable=1
gpgcheck=0
[virt]
name=virt
baseurl=ftp://controller/openstack/virt/
enable=1
gpgcheck=0
# Clear and rebuild the YUM cache
[root@compute ~]# yum clean all
[root@compute ~]# yum makecache

2.12 Install Tools (compute and controller nodes)

[root@compute ~]# yum -y install net-tools
[root@controller ~]# yum -y install net-tools

2.13 Configure Chrony for LAN Time Synchronization

  1. Configure the controller node as the NTP server

    [root@controller ~]# vim /etc/chrony.conf 
    
    # Remove the default upstream servers
    server 0.centos.pool.ntp.org iburst
    server 1.centos.pool.ntp.org iburst
    server 2.centos.pool.ntp.org iburst
    server 3.centos.pool.ntp.org iburst
    
    # Add Alibaba Cloud's NTP server (only useful while the host has Internet access)
    server ntp.aliyun.com iburst
    
    # If no external NTP server is reachable, serve local time as the reference
    local stratum 1
    
    # Allow hosts on the same subnet to use this machine's NTP service
    allow 192.168.223.0/24
    
    # Restart the service
    [root@controller ~]# systemctl restart chronyd
    
  2. Point the compute node at the controller for time synchronization

    [root@compute ~]# vim /etc/chrony.conf
    
    # Remove the default upstream servers
    server 0.centos.pool.ntp.org iburst
    server 1.centos.pool.ntp.org iburst
    server 2.centos.pool.ntp.org iburst
    server 3.centos.pool.ntp.org iburst
    
    # Add the controller node as the time source (the compute node syncs to the controller)
    server controller iburst
    
    # Restart the service
    [root@compute ~]# systemctl restart chronyd
    
    # Test: check the time synchronization status
    [root@compute ~]# chronyc sources
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample               
    ===============================================================================
    ^* controller                    3   6    17    10    +56us[ +120us] +/-   40ms      # the asterisk means synchronization succeeded
    

2.14 Save Snapshots

[Figures: taking snapshots of the controller and compute nodes in VMware]

2.15 Host Configuration Self-Check List

3. Identity Service (Keystone) Installation (controller node)

3.1 Install and Configure Keystone

3.1.1 Install the Keystone packages
[root@controller ~]# yum -y install openstack-keystone httpd mod_wsgi
				——————————————————————————————————————————
					openstack-keystone: the Keystone package (installation creates a "keystone" Linux user and group)
					httpd: the Apache web server
					mod_wsgi: the plugin that lets the web server support WSGI
# Keystone is a web application that runs on a web server and supports the Web Server Gateway Interface (WSGI)
# Test: check that the keystone user and same-named group were created
[root@controller ~]# cat /etc/passwd | grep keystone
keystone:x:163:163:OpenStack Keystone Daemons:/var/lib/keystone:/sbin/nologin

[root@controller ~]# cat /etc/group | grep keystone
keystone:x:163:
3.1.2 Create the Keystone database and grant privileges
  1. Log in to the database server

    [root@controller ~]# mysql -u root -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 11
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
  2. Create a database named "keystone"

    MariaDB [(none)]> CREATE DATABASE keystone;
    Query OK, 1 row affected (0.054 sec)
    
  3. Grant the user privileges on the database

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.254 sec)
    # Grant all privileges (ALL PRIVILEGES) on every table of the "keystone" database (keystone.*) to the user "keystone" logging in from the local host ('localhost'), authenticated by the password "000000"
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.001 sec)
    # Grant the same privileges to the "keystone" user logging in from any remote host ('%')
    
  4. Exit the database

    MariaDB [(none)]> quit
    Bye
    
3.1.3 Edit the Keystone configuration file
[root@controller ~]# vim /etc/keystone/keystone.conf 
# In the [database] section (around line 600), which defines the database connection, add:
connection = mysql+pymysql://keystone:000000@controller/keystone
# This connects to the database "keystone" on host "controller" as user keystone with password "000000"
# In the [token] section (around line 2476), which sets the token format, uncomment:
provider = fernet
# Fernet is the currently recommended token format: a lightweight message format.
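
The same two settings can be applied without opening an editor; a minimal sketch, assuming the crudini utility (from the crudini package) is installed:

[root@controller ~]# crudini --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:000000@controller/keystone
[root@controller ~]# crudini --set /etc/keystone/keystone.conf token provider fernet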
3.1.4 Initialize the Keystone database
  1. Synchronize the database

    [root@controller ~]# su keystone -s /bin/sh -c "keystone-manage db_sync"
    				——————————————————————————————————
    					su keystone: run the command as the keystone user (it has the rights to manage the keystone database)
    					-s /bin/sh: su option specifying which shell to use
    					-c: su option running a single command, then returning to the original user
    

    The keystone-manage service management tool

       Syntax: keystone-manage [OPTION] 	
    
    | OPTION | EXPLANATION |
    | --- | --- |
    | db_sync | synchronize the database |
    | fernet_setup | create a Fernet key repository for token encryption |
    | credential_setup | create a Fernet key repository for credential encryption |
    | bootstrap | initialize identity information and store it in the database |
    | token_flush | purge expired tokens |
  2. Test: inspect the database

    [root@controller ~]# mysql -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 14
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
    MariaDB [(none)]> USE keystone;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    
    Database changed
    
    MariaDB [keystone]> SHOW TABLES;
    +------------------------------------+
    | Tables_in_keystone                 |
    +------------------------------------+
    | access_rule                        |
    | access_token                       |
    | application_credential             |
    | application_credential_access_rule |
    | application_credential_role        |
    | assignment                         |
    | config_register                    |
    # 48 tables in total
    

3.2 Keystone Component Initialization

3.2.1 Initialize the Fernet key repositories
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# This creates the "/etc/keystone/fernet-keys/" directory and generates two Fernet keys in it, used to encrypt and decrypt tokens.

# Verify:
[root@controller ~]# ls /etc/keystone/fernet-keys/
0  1
[root@controller ~]#  keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# This creates the "/etc/keystone/credential-keys/" directory and generates two Fernet keys in it, used to encrypt and decrypt user credentials

# Verify:
[root@controller ~]# ls /etc/keystone/credential-keys/
0  1
3.2.2 Initialize the admin identity

OpenStack has a default user, "admin", but it has no login credentials yet. Use the "keystone-manage bootstrap" command to give the "admin" user initial credentials; later logins are authenticated against them.

[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 --bootstrap-admin-url http://controller:5000/v3 --bootstrap-internal-url http://controller:5000/v3 --bootstrap-public-url http://controller:5000/v3 --bootstrap-region-id RegionOne
# Initializes the admin user with password 000000

keystone-manage bootstrap command parameters

	Syntax: keystone-manage bootstrap [OPTIONS]

| OPTION | EXPLANATION |
| --- | --- |
| --bootstrap-username | login user name; defaults to "admin" if omitted |
| --bootstrap-password | password for the "admin" user |
| --bootstrap-admin-url | service endpoint for the admin interface |
| --bootstrap-internal-url | service endpoint for internal components |
| --bootstrap-public-url | service endpoint for public users |
| --bootstrap-region-id | region ID, used when configuring clustered services |
3.2.3 Configure the web service
  1. Add WSGI support to the Apache server

    [root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d
    # Symlink "wsgi-keystone.conf" into the "/etc/httpd/conf.d/" directory
    
  2. Edit the Apache configuration and start the service

    [root@controller ~]# vim /etc/httpd/conf/httpd.conf 
    
    # Set ServerName to the web server's domain name or IP (around line 95)
    ServerName controller
    
    # Start the service and enable it at boot
    [root@controller ~]# systemctl enable httpd
    [root@controller ~]# systemctl start httpd
    
3.2.4 Simulate a login to verify
  1. Create a file of initial environment variables

    [root@controller ~]# vim admin-login					# this file stores the identity credentials
    
    export OS_USERNAME=admin							# user name for logging in to the OpenStack platform
    export OS_PASSWORD=000000							# password
    export OS_PROJECT_NAME=admin						# project to operate on
    export OS_USER_DOMAIN_NAME=Default					# domain the user belongs to
    export OS_PROJECT_DOMAIN_NAME=Default				# domain the project belongs to
    export OS_AUTH_URL=http://controller:5000/v3		# authentication endpoint
    export OS_IDENTITY_API_VERSION=3					# Keystone API version
    export OS_IMAGE_API_VERSION=2						# image service API version
    
  2. Load the environment variables and verify

    # Load the credentials into the environment
    [root@controller ~]# source admin-login
    
    # Test: list the exported environment variables
    [root@controller ~]# export -p
    declare -x OS_AUTH_URL="http://controller:5000/v3"
    declare -x OS_IDENTITY_API_VERSION="3"
    declare -x OS_IMAGE_API_VERSION="2"
    declare -x OS_PASSWORD="000000"
    declare -x OS_PROJECT_DOMAIN_NAME="Default"
    declare -x OS_PROJECT_NAME="admin"
    declare -x OS_USERNAME="admin"
    declare -x OS_USER_DOMAIN_NAME="Default"
    # The environment variables were loaded successfully
    
3.2.5 Verify the Keystone service
  1. Create and list projects

    # 1. Create a project named "project"
    [root@controller ~]# openstack project create --domain default project
    +-------------+----------------------------------+
    | Field       | Value                            |
    +-------------+----------------------------------+
    | description |                                  |
    | domain_id   | default                          |
    | enabled     | True                             |
    | id          | 0e284b9459e14b40801ce2bffb2f5e0a |
    | is_domain   | False                            |
    | name        | project                          |
    | options     | {}                               |
    | parent_id   | default                          |
    | tags        | []                               |
    +-------------+----------------------------------+
    # openstack project create: create a project
    # --domain default:         the project belongs to the default domain
    # project:				   the project name
    
    # 2. List the existing projects
    [root@controller ~]# openstack project list
    +----------------------------------+---------+
    | ID                               | Name    |
    +----------------------------------+---------+
    | 0e284b9459e14b40801ce2bffb2f5e0a | project |   # the project created in the previous step
    | 75697606e21045f188036410b6e5ac90 | admin   |
    +----------------------------------+---------+
    
  2. Create and list roles

    # 1. Create a role named "user"
    [root@controller ~]# openstack role create user
    +-------------+----------------------------------+
    | Field       | Value                            |
    +-------------+----------------------------------+
    | description | None                             |
    | domain_id   | None                             |
    | id          | 2ea89bb8766c48fd8167194be6f087d0 |
    | name        | user                             |
    | options     | {}                               |
    +-------------+----------------------------------+
    
    # 2. List the existing roles
    [root@controller ~]# openstack role list
    +----------------------------------+--------+
    | ID                               | Name   |
    +----------------------------------+--------+
    | 1fa5a3a626db402aa833f9d22e69a23e | member |
    | 2ea89bb8766c48fd8167194be6f087d0 | user   |    # the role created in the previous step
    | 78d64a433e8647379b290a9d02f5cc2a | admin  |
    | a33d50fbc7834985975f231f494093d8 | reader |
    +----------------------------------+--------+
    
  3. List domains and users

    # 1. List the existing domains
    [root@controller ~]# openstack domain list
    +---------+---------+---------+--------------------+
    | ID      | Name    | Enabled | Description        |
    +---------+---------+---------+--------------------+
    | default | Default | True    | The default domain |
    +---------+---------+---------+--------------------+
    
    # 2. List the existing users
    [root@controller ~]# openstack user list
    +----------------------------------+-------+
    | ID                               | Name  |
    +----------------------------------+-------+
    | 157c2fe27dc54c8baa467a035274ec00 | admin |
    +----------------------------------+-------+
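
  4. Request a token to confirm authentication works end to end; this is the standard check from the install guide and should print a table containing a newly issued token:

    [root@controller ~]# openstack token issue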
    
3.2.6 Keystone Self-Check List

After checking everything, save controller node snapshot 2: Keystone installation complete

4. Image Service (Glance) Installation (controller node)

4.1 Install and Configure the Glance Image Service

4.1.1 Install the Glance packages
[root@controller ~]# yum -y install openstack-glance
# Installing "openstack-glance" automatically creates a "glance" user and same-named group in CentOS
[root@controller ~]# cat /etc/passwd | grep glance
glance:x:161:161:OpenStack Glance Daemons:/var/lib/glance:/sbin/nologin

[root@controller ~]# cat /etc/group | grep glance
glance:x:161:
4.1.2 Create the Glance database and grant privileges
  1. Log in to the database server

    [root@controller ~]# mysql -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 22
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
  2. Create the "glance" database

    MariaDB [(none)]> CREATE DATABASE glance;
    Query OK, 1 row affected (0.003 sec)
    
  3. Grant the user privileges on the database

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.019 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.000 sec)
    
4.1.3 Edit the Glance configuration file
  1. Back up the configuration file

    [root@controller ~]# cp /etc/glance/glance-api.conf  /etc/glance/glance-api.bak
    
  2. Strip all comments and blank lines, writing the result back to the original file name

    [root@controller ~]# grep -Ev '^$|#' /etc/glance/glance-api.bak > /etc/glance/glance-api.conf 
    
    [root@controller ~]# vim /etc/glance/glance-api.conf 
    [DEFAULT]
    [cinder]
    [cors]
    [database]
    [file]
    [glance.store.http.store]
    [glance.store.rbd.store]
    [glance.store.sheepdog.store]
    [glance.store.swift.store]
    [glance.store.vmware_datastore.store]
    [glance_store]
    [image_format]
    [keystone_authtoken]
    [oslo_concurrency]
    [oslo_messaging_amqp]
    [oslo_messaging_kafka]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_middleware]
    [oslo_policy]
    [paste_deploy]
    [profiler]
    [store_type_location_strategy]
    [task]
    [taskflow_executor]
    
  3. Edit the configuration file

    [database]  # this section defines the database connection
    connection = mysql+pymysql://glance:000000@controller/glance
    
    # Interaction with Keystone
    [keystone_authtoken]
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    username = glance
    password = 000000
    project_name = project
    user_domain_name = Default
    project_domain_name  = Default
    
    [paste_deploy]
    flavor = keystone
    
    # Back-end storage
    [glance_store]
    stores = file
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/
    # [Note] The "/var/lib/glance" directory is created when Glance is installed; the "glance" user has full permissions on it
    
4.1.4 Initialize the Glance database
  1. Synchronize the database

    [root@controller ~]# su glance -s /bin/sh -c "glance-manage db_sync"
    Database is synced successfully.
    
  2. Check the database

    [root@controller ~]# mysql -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 25
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
    MariaDB [(none)]> use glance;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    
    Database changed
    
    MariaDB [glance]> show tables;
    +----------------------------------+
    | Tables_in_glance                 |
    +----------------------------------+
    | alembic_version                  |
    | image_locations                  |
    | image_members                    |
    | image_properties                 |
    | image_tags                       |
    | images                           |
    | metadef_namespace_resource_types |
    | metadef_namespaces               |
    | metadef_objects                  |
    | metadef_properties               |
    | metadef_resource_types           |
    | metadef_tags                     |
    | migrate_version                  |
    | task_info                        |
    | tasks                            |
    +----------------------------------+
    15 rows in set (0.002 sec)
    # The sync succeeded if these tables exist in the glance database
    

4.2 Glance Component Initialization

4.2.1 Create the Glance user and assign a role
  1. Load the environment variables to log in

    [root@controller ~]# . admin-login
    # "." is equivalent to "source" for loading the environment variables
    
  2. Create the user "glance" on the OpenStack platform

    [root@controller ~]# openstack user create --domain default --password 000000 glance
    +---------------------+----------------------------------+
    | Field               | Value                            |
    +---------------------+----------------------------------+
    | domain_id           | default                          |
    | enabled             | True                             |
    | id                  | bee66d0f8d7b4ff2ad16400cdc0f7138 |
    | name                | glance                           |
    | options             | {}                               |
    | password_expires_at | None                             |
    +---------------------+----------------------------------+
    # Creates a user named "glance" with password "000000" in the "default" domain
    

    The user name and password set here must match the "[keystone_authtoken]" section of "/etc/glance/glance-api.conf".

  3. Assign the "admin" role to the user "glance"

    [root@controller ~]# openstack role add --project project --user glance admin
    # Gives the "glance" user "admin" rights when operating on the "project" project
    
4.2.2 Create the Glance service and its endpoints
  1. Create the service

    [root@controller ~]# openstack service create --name glance image     # create a service named "glance" of type "image"
    +---------+----------------------------------+
    | Field   | Value                            |
    +---------+----------------------------------+
    | enabled | True                             |
    | id      | 6f106feeb8ec40838aa189ef94aafc4c |
    | name    | glance                           |
    | type    | image                            |
    +---------+----------------------------------+
    
    
  2. Create the image service endpoints

    An OpenStack component has three kinds of service endpoints, addressing public users (public), internal components (internal), and admin users (admin).

    # 1. Create the endpoint for public users
    [root@controller ~]# openstack endpoint create --region RegionOne glance public http://controller:9292
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 85af1b039d954f24b9bd1d1ed7cc1564 |
    | interface    | public                           |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | 6f106feeb8ec40838aa189ef94aafc4c |
    | service_name | glance                           |
    | service_type | image                            |
    | url          | http://controller:9292           |
    +--------------+----------------------------------+
    
    # 2. Create the endpoint for internal components
    [root@controller ~]# openstack endpoint create --region RegionOne glance internal http://controller:9292
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 7f0a84e96c79440eb570c8f10c96e779 |
    | interface    | internal                         |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | 6f106feeb8ec40838aa189ef94aafc4c |
    | service_name | glance                           |
    | service_type | image                            |
    | url          | http://controller:9292           |
    +--------------+----------------------------------+
    
    # 3. Create the endpoint for admin users
    [root@controller ~]# openstack endpoint create --region RegionOne glance admin http://controller:9292
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 984d6855f1c64a4cac0d3ecc25a7432d |
    | interface    | admin                            |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | 6f106feeb8ec40838aa189ef94aafc4c |
    | service_name | glance                           |
    | service_type | image                            |
    | url          | http://controller:9292           |
    +--------------+----------------------------------+
    
4.2.3 Start the Glance service
[root@controller ~]# systemctl enable openstack-glance-api
[root@controller ~]# systemctl start openstack-glance-api

4.3 Verify the Glance Service

# 1. Check the port
[root@controller ~]# netstat -tnlup | grep 9292
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      6650/python2  
# 2. Check the service status
[root@controller ~]# systemctl status openstack-glance-api
● openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2023-07-17 09:59:24 CST; 3min 29s ago

4.4 Create an Image with Glance

Copy the CirrOS image downloaded from the official site (cirros-0.5.1-x86_64-disk.img) to the "/root" directory:

C:\Users\admin>scp F:\openstack环境\cirros-0.5.1-x86_64-disk.img root@<controller-IP>:/root/
cirros-0.5.1-x86_64-disk.img                                                          100%   16MB  33.8MB/s   00:00
[root@controller ~]# ls
cirros-0.5.1-x86_64-disk.img 
# 1. Create the image with Glance
[root@controller ~]# openstack image create --file cirros-0.5.1-x86_64-disk.img  --disk-format qcow2 --container-format bare --public cirros
		_________________________________________
			The "openstack image create" command creates a public (--public) image named "cirros" from the file "cirros-0.5.1-x86_64-disk.img" in the current directory; the resulting image has disk format "qcow2" and container format "bare".
# 2. List the images
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 08a58e13-dab2-4378-87c4-24dc6bd99b75 | cirros | active |
+--------------------------------------+--------+--------+

Image status reference

| Status | Description |
| --- | --- |
| queued | the image identifier is reserved in the Glance registry, but no image data has been uploaded yet |
| saving | the image's raw data is being uploaded to Glance |
| active | the image is fully available in Glance |
| deactivated | no non-admin user may access the image data |
| killed | an error occurred while uploading the data and the image is unreadable |
| deleted | Glance keeps the image's metadata, but the image is no longer available (it will be removed automatically later) |
| pending_delete | like deleted, but the image data has not been purged yet (an image in this state cannot be recovered) |
# 3. Inspect the image file on disk (glance-api.conf set the image store location to /var/lib/glance/images)
[root@controller ~]# ll /var/lib/glance/images/
total 15956
-rw-r----- 1 glance glance 16338944 Jul 20 16:17 08a58e13-dab2-4378-87c4-24dc6bd99b75
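
The image's metadata (size, checksum, status) can also be read back through the client, for example:

[root@controller ~]# openstack image show cirros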

4.5 Glance Installation Self-Check List

After checking everything, save controller node snapshot 3: Glance installation complete

5. Placement Service Installation (controller node)

5.1 Install and Configure the Placement Service

5.1.1 Install the Placement packages
[root@controller ~]# yum install openstack-placement-api
# Installing this package automatically creates a "placement" user and same-named group
[root@controller ~]# cat /etc/passwd | grep placement
placement:x:983:977:OpenStack Placement:/:/bin/bash

[root@controller ~]# cat /etc/group | grep placement
placement:x:977:
5.1.2 Create the Placement database and grant privileges
  1. Log in to the database

    [root@controller ~]# mysql -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 38
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
  2. Create the "placement" database

    MariaDB [(none)]> CREATE DATABASE placement;
    Query OK, 1 row affected (0.000 sec)
    
  3. Grant privileges on the database

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.008 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.001 sec)
    
5.1.3 Edit the Placement configuration file
  1. Strip the comments and blank lines from the configuration file

    # 1. Back up the configuration file
    [root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.bak
    [root@controller ~]# ls /etc/placement/
    placement.bak  placement.conf  policy.json
    
    # 2. Strip comments and blank lines, generating a new configuration file
    [root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.bak > /etc/placement/placement.conf 
    [root@controller ~]# cat /etc/placement/placement.conf 
    [DEFAULT]
    [api]
    [cors]
    [keystone_authtoken]
    [oslo_policy]
    [placement]
    [placement_database]
    [profiler]
    
  2. Edit the configuration

    [root@controller ~]# vim /etc/placement/placement.conf 
    
    # Database connection
    [placement_database]
    connection = mysql+pymysql://placement:000000@controller/placement
    
    # Interaction with Keystone
    [api]
    auth_strategy = keystone
    
    [keystone_authtoken]
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = project
    username = placement
    password = 000000
    
5.1.4 Edit the Apache configuration file
[root@controller ~]# vim /etc/httpd/conf.d/00-placement-api.conf 
# Add the following inside the "VirtualHost" block (it tells the web server that, when the Apache version is 2.4 or later, all requests are granted access to the "/usr/bin/" directory)
<Directory /usr/bin>
	<IfVersion >= 2.4>
		Require all granted
	</IfVersion>
</Directory>
# Check the current web server version
[root@controller ~]# httpd -v
Server version: Apache/2.4.6 (CentOS)
5.1.5 Initialize the Placement database
  1. Synchronize the database

    [root@controller ~]# su placement -s /bin/sh -c "placement-manage db sync"
    
  2. Check that the database was synchronized successfully

    [root@controller ~]# mysql -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 41
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
    MariaDB [(none)]> use placement;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    
    Database changed
    
    MariaDB [placement]> show tables;
    +------------------------------+
    | Tables_in_placement          |
    +------------------------------+
    | alembic_version              |
    | allocations                  |
    | consumers                    |
    | inventories                  |
    | placement_aggregates         |
    | projects                     |
    | resource_classes             |
    | resource_provider_aggregates |
    | resource_provider_traits     |
    | resource_providers           |
    | traits                       |
    | users                        |
    +------------------------------+
    12 rows in set (0.001 sec)
    

5.2 Placement Component Initialization

5.2.1 Create the Placement user and assign a role
  1. Load the environment variables to log in

    [root@controller ~]# source admin-login
    
  2. Create the user "placement" on the OpenStack platform

    [root@controller ~]# openstack user create --domain default --password 000000 placement
    +---------------------+----------------------------------+
    | Field               | Value                            |
    +---------------------+----------------------------------+
    | domain_id           | default                          |
    | enabled             | True                             |
    | id                  | 5b7a7b4f7e9144888ba23857e5cb828d |
    | name                | placement                        |
    | options             | {}                               |
    | password_expires_at | None                             |
    +---------------------+----------------------------------+
    # Creates a user named "placement" with password "000000" in the "default" domain
    

    The user name and password set here must match the "[keystone_authtoken]" section of "/etc/placement/placement.conf".

  3. Assign the "admin" role to the user "placement"

    # Give the "placement" user "admin" rights when operating on the "project" project
    [root@controller ~]# openstack role add --project project --user placement admin
    
5.2.2 Create the Placement service and its endpoints
  1. Create the service

    [root@controller ~]# openstack service create --name placement placement
    +---------+----------------------------------+
    | Field   | Value                            |
    +---------+----------------------------------+
    | enabled | True                             |
    | id      | aed5079ceead4dae80b084d59e0b71d6 |
    | name    | placement                        |
    | type    | placement                        |
    +---------+----------------------------------+
    
  2. Create the service endpoints

    # 1. Create the endpoint for public users
    [root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | e1e889616b2d47cc82f072ad5cfa08f4 |
    | interface    | public                           |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | aed5079ceead4dae80b084d59e0b71d6 |
    | service_name | placement                        |
    | service_type | placement                        |
    | url          | http://controller:8778           |
    +--------------+----------------------------------+
    
    # 2. Create the endpoint for internal components
    [root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 18bab6d33adf4a0586f7e74dd5f1078a |
    | interface    | internal                         |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | aed5079ceead4dae80b084d59e0b71d6 |
    | service_name | placement                        |
    | service_type | placement                        |
    | url          | http://controller:8778           |
    +--------------+----------------------------------+
    
    # 3. Create the endpoint for admin users
    [root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 47dc11279c3e42feaa8abcf0c19cc517 |
    | interface    | admin                            |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | aed5079ceead4dae80b084d59e0b71d6 |
    | service_name | placement                        |
    | service_type | placement                        |
    | url          | http://controller:8778           |
    +--------------+----------------------------------+
    
5.2.3 Start the Placement service
# Like Keystone, Placement is served by Apache, so restarting the Apache service is enough to put the configuration into effect
[root@controller ~]# systemctl restart httpd

5.3 Verify the Placement Service

  1. Check the port

    [root@controller ~]# netstat -tnlup | grep 8778
    tcp6       0      0 :::8778                 :::*                    LISTEN      21474/httpd         
    
  2. Check the service endpoint

    [root@controller ~]# curl http://controller:8778
    {"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}
    

5.4 Placement Installation Self-Check List

After checking everything, save controller node snapshot 4: Placement installation complete

6. Compute Service (Nova) Installation

6.1 Install and Configure Nova on the Controller Node

6.1.1 Install the Nova packages
[root@controller ~]# yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy
				___________________________________________
					openstack-nova-api: Nova's interface to the outside world
					openstack-nova-conductor: the Nova conductor service, which mediates database access
					openstack-nova-scheduler: the Nova scheduler service, which picks the host on which to create an instance
					openstack-nova-novncproxy: Nova's Virtual Network Computing (VNC) proxy module, which gives users VNC access to instances
# Installing nova automatically creates a "nova" system user and group
[root@controller ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin

[root@controller ~]# cat /etc/group | grep nova
nobody:x:99:nova
nova:x:162:nova
6.1.2 Create the Nova databases and grant privileges
  1. Log in to the database

    [root@controller ~]# mysql -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 54
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
  2. Create the "nova_api", "nova_cell0", and "nova" databases

    MariaDB [(none)]> CREATE DATABASE nova_api;
    Query OK, 1 row affected (0.006 sec)
    
    MariaDB [(none)]> CREATE DATABASE nova_cell0;
    Query OK, 1 row affected (0.001 sec)
    
    MariaDB [(none)]> CREATE DATABASE nova;
    Query OK, 1 row affected (0.000 sec)
    
  3. Grant privileges on the databases

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.004 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.000 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.001 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED  BY '000000';
    Query OK, 0 rows affected (0.000 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.001 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.006 sec)
    
6.1.3 Edit the Nova configuration file
  1. Strip the comments and blank lines from the configuration file

    # 1. Back up the configuration file
    [root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.bak
    [root@controller ~]# ls /etc/nova/nova*
    /etc/nova/nova.bak  /etc/nova/nova.conf
    
    # 2. Strip comments and blank lines, generating a new file
    [root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.bak > /etc/nova/nova.conf 
    [root@controller ~]# cat /etc/nova/nova.conf 
    [DEFAULT]
    [api]
    [api_database]
    [barbican]
    [cache]
    [cinder]
    [compute]
    [conductor]
    [console]
    [consoleauth]
    [cors]
    [database]
    [devices]
    [ephemeral_storage_encryption]
    [filter_scheduler]
    [glance]
    [guestfs]
    [healthcheck]
    [hyperv]
    [ironic]
    
  2. Write the configuration

    [root@controller ~]# vim /etc/nova/nova.conf 
    
    # 1. Database connections for "nova_api" and "nova"
    [api_database]
    connection = mysql+pymysql://nova:000000@controller/nova_api
    [database]
    connection = mysql+pymysql://nova:000000@controller/nova
    
    # 2. Interaction with Keystone
    [api]
    auth_strategy = keystone
    [keystone_authtoken]
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = project
    username = nova
    password = 000000
    
    # 3. Interaction with Placement
    [placement]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = project
    username = placement
    password = 000000
    region_name = RegionOne
    
    # 4. Interaction with Glance
    [glance]
    api_servers = http://controller:9292
    
    # 5. Lock file path
    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp
    
    # 6. Message queue and firewall settings
    [DEFAULT]
    enabled_apis = osapi_compute,metadata     # comma-separated list of enabled service APIs
    transport_url = rabbit://openstack:000000@controller:5672  # format: rabbit://rabbitmq_username:password@node-address-or-domain:5672
    my_ip = 192.168.223.130
    use_neutron = true
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
    # 7. VNC settings
    [vnc]
    enabled = true
    server_listen = $my_ip
    server_proxyclient_address = $my_ip
    
6.1.4 Initialize the Nova databases
  1. Initialize the "nova_api" database

    [root@controller ~]# su nova -s /bin/sh -c "nova-manage api_db sync"
    # No output means the initialization succeeded!
    
  2. Create the "cell1" cell, which uses the "nova" database

    [root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"
    
  3. Map to the "cell0" database, so that "cell0" keeps the same table structure as "nova"

    [root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"
    
  4. Initialize the "nova" database; because of the mapping, the same tables are created in "cell0" as well

    [root@controller ~]# su nova -s /bin/sh -c "nova-manage db sync"
    
6.1.5 Verify that the cells are registered
[root@controller ~]# nova-manage cell_v2 list_cells

[Figure: output of "nova-manage cell_v2 list_cells" listing the cell0 and cell1 records]

6.1.6 Nova Component Initialization
6.1.6-1 Create the Nova user and assign a role
  1. Load the environment variables to log in

    [root@controller ~]# source admin-login
    
  2. Create the user "nova" on the OpenStack platform

    [root@controller ~]# openstack user create --domain default --password 000000 nova
    +---------------------+----------------------------------+
    | Field               | Value                            |
    +---------------------+----------------------------------+
    | domain_id           | default                          |
    | enabled             | True                             |
    | id                  | 23acdca9cd1244bcb8aeb8d117a93db1 |
    | name                | nova                             |
    | options             | {}                               |
    | password_expires_at | None                             |
    +---------------------+----------------------------------+
    # The user name and password here must match the "[keystone_authtoken]" section of "nova.conf"
    
  3. Assign the "admin" role to the user "nova"

    [root@controller ~]# openstack role add --project project --user nova admin
    
6.1.6-2 Create the Nova service and its endpoints
  1. Create the service

    [root@controller ~]# openstack service create --name nova compute
    +---------+----------------------------------+
    | Field   | Value                            |
    +---------+----------------------------------+
    | enabled | True                             |
    | id      | d7cb10067a04401e95e8144fd5f8a3f3 |
    | name    | nova                             |
    | type    | compute                          |
    +---------+----------------------------------+
    
  2. Create the service endpoints

    # 1. Create the endpoint for public users
    [root@controller ~]# openstack endpoint create --region RegionOne nova public http://controller:8774/v2.1
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 61bb957c9e714f7db1856af5c61c115f |
    | interface    | public                           |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | d7cb10067a04401e95e8144fd5f8a3f3 |
    | service_name | nova                             |
    | service_type | compute                          |
    | url          | http://controller:8774/v2.1      |
    +--------------+----------------------------------+
    
    # 2. Create the endpoint for internal components
    [root@controller ~]# openstack endpoint create --region RegionOne nova internal http://controller:8774/v2.1
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | da3afa1b7b01486cad2109a4bce58077 |
    | interface    | internal                         |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | d7cb10067a04401e95e8144fd5f8a3f3 |
    | service_name | nova                             |
    | service_type | compute                          |
    | url          | http://controller:8774/v2.1      |
    +--------------+----------------------------------+
    
    # 3. Create the endpoint for admin access
    [root@controller ~]# openstack endpoint create --region RegionOne nova admin http://controller:8774/v2.1
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 4c67400ca28c4dba9aebb364f030bb57 |
    | interface    | admin                            |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | d7cb10067a04401e95e8144fd5f8a3f3 |
    | service_name | nova                             |
    | service_type | compute                          |
    | url          | http://controller:8774/v2.1      |
    +--------------+----------------------------------+
    
6.1.6-3 Start the Nova services on the controller node
[root@controller ~]# systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
[root@controller ~]# systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
6.1.7 Verify the Nova services on the controller node
  1. Check the listening ports

    [root@controller ~]# netstat -nutpl | grep 877
    tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      10861/python2       
    tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      10861/python2       
    tcp6       0      0 :::8778                 :::*                    LISTEN      8975/httpd 
    
  2. View the compute service list

    [root@controller ~]# openstack compute service list
    +----+----------------+------------+----------+---------+-------+----------------------------+
    | ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
    +----+----------------+------------+----------+---------+-------+----------------------------+
    |  4 | nova-conductor | controller | internal | enabled | up    | 2023-07-24T05:22:04.000000 |
    |  5 | nova-scheduler | controller | internal | enabled | up    | 2023-07-24T05:22:01.000000 |
    +----+----------------+------------+----------+---------+-------+----------------------------+
     # The service is healthy when the “nova-conductor” and “nova-scheduler” modules are both up on the controller node and show a current update time
    

    If either module shows a state of down, check its log under the “/var/log/nova/” directory.

    For example, /var/log/nova/nova-scheduler.log may contain:

    ERROR oslo_service.service AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.

    This means the login was refused by the AMQPLAIN authentication mechanism.

    Fix: check that the transport_url value in the Nova configuration file /etc/nova/nova.conf follows the format rabbit://rabbitmq_username:password@node_address_or_hostname:5672

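    A quick way to confirm the broker credentials is to test them directly on the controller node; a minimal troubleshooting sketch (assuming the RabbitMQ user “openstack” with password “000000” created earlier):

    # Show the exact URL Nova is using
    [root@controller ~]# grep transport_url /etc/nova/nova.conf
    # List the users known to the broker and test the credentials
    [root@controller ~]# rabbitmqctl list_users
    [root@controller ~]# rabbitmqctl authenticate_user openstack 000000
    # If authentication fails, reset the password so it matches nova.conf
    [root@controller ~]# rabbitmqctl change_password openstack 000000
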
6.2 Installing and configuring the Nova service on the compute node

6.2.1 Install the Nova package
  1. Install

    [root@compute ~]# yum -y install openstack-nova-compute
    
  2. Verify

    [root@compute ~]# cat /etc/passwd | grep nova
    nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
    
    [root@compute ~]# cat /etc/group | grep nova
    nobody:x:99:nova
    qemu:x:107:nova
    libvirt:x:985:nova
    nova:x:162:nova
    
6.2.2 Edit the configuration file
  1. Back up the configuration file

    [root@compute ~]# cp /etc/nova/nova.conf  /etc/nova/nova.bak
    
  2. Strip comments and blank lines to generate a clean file

    [root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.bak > /etc/nova/nova.conf 
    
  3. Edit the configuration

    [root@compute ~]# vim /etc/nova/nova.conf 
    
    # 1. Enable interaction with Keystone
    [api]
    auth_strategy = keystone
    
    [keystone_authtoken]
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = project
    username = nova
    password = 000000
    
    # 2. Enable interaction with Placement
    [placement]
    auth_url = http://controller:5000
    auth_type = password 
    project_domain_name = Default
    user_domain_name = Default
    project_name = project
    username = placement
    password = 000000
    region_name = RegionOne
    
    # 3. Enable interaction with Glance
    [glance]
    api_servers = http://controller:9292
    
    # 4. Configure the lock path
    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp
    
    # 5. Configure the message queue and firewall driver
    [DEFAULT]
    enabled_apis = osapi_compute,metadata
    transport_url = rabbit://openstack:000000@controller:5672
    my_ip = 192.168.223.131
    use_neutron = true
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
    # 6. Configure VNC
    [vnc]
    enabled = true
    server_listen = 0.0.0.0
    server_proxyclient_address = $my_ip
    novncproxy_base_url = http://controller:6080/vnc_auto.html      # only the compute node needs this setting
    
    # 7. Set the virtualization type to QEMU (see the note following this configuration)
    [libvirt]
    virt_type = qemu
    
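    “qemu” uses full software emulation, which is the safe choice inside a nested virtual machine. If the compute node's CPU exposes hardware virtualization, “kvm” performs much better; a quick check (the test used by the official installation guide):

    [root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
    # 0  -> no hardware acceleration available: keep virt_type = qemu
    # >0 -> hardware acceleration available: virt_type = kvm may be used instead
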
6.2.3 Start the Nova service on the compute node
[root@compute ~]# systemctl enable libvirtd openstack-nova-compute
[root@compute ~]# systemctl start libvirtd openstack-nova-compute
# libvirtd is an open-source virtualization management interface providing unified management of KVM, Xen, VMware ESX, QEMU, and other hypervisors

6.3 Discover compute nodes and verify the service

6.3.1 Discover compute nodes
  1. Source the environment variables to authenticate as admin

    [root@controller ~]# . admin-login 
    
  2. Discover the new compute node

    # Switch to the nova user and run the command that discovers unregistered compute nodes
    [root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose"
    Found 2 cell mappings.
    Getting computes from cell 'cell1': 30253634-2832-4b44-93d1-88e6f02cbe8a
    Checking host mapping for compute host 'compute': c5869d42-3625-47a7-8a61-5f476f38b3c4
    Creating host mapping for compute host 'compute': c5869d42-3625-47a7-8a61-5f476f38b3c4
    Found 1 unmapped computes in cell: 30253634-2832-4b44-93d1-88e6f02cbe8a
    Skipping cell0 since it does not contain hosts.
    # Once discovered, the compute node is automatically mapped to the “cell1” cell and can be managed from then on
    
  3. Enable automatic discovery

    An OpenStack deployment can contain many compute nodes, and the discovery command must be run once for every new node. To reduce this workload, the configuration file can schedule the command to run automatically at a fixed interval.

    [root@controller ~]# vim /etc/nova/nova.conf 
    
    [scheduler]
    discover_hosts_in_cells_interval = 60    # run the discovery command automatically every 60 seconds
    
    [root@controller ~]# systemctl restart openstack-nova-api     # restart the service so the change takes effect
    
6.3.2 Verify the Nova service
# Method 1: check the status of each compute service module
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  4 | nova-conductor | controller | internal | enabled | up    | 2023-07-24T07:45:40.000000 |
|  5 | nova-scheduler | controller | internal | enabled | up    | 2023-07-24T07:45:47.000000 |
|  7 | nova-compute   | compute    | nova     | enabled | up    | 2023-07-24T07:45:41.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
# Each module (Binary) showing a State of up means the service is healthy
# Method 2: list the existing OpenStack services and their endpoints
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| glance    | image     | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   admin: http://controller:5000/v3      |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3     |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3   |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+
# Shows the Name, service Type, and Endpoints of the four services registered on the platform so far
# Method 3: use the Nova status checking tool “nova-status”
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+
# Every check Result showing “Success” means the services are running normally

6.4 Nova installation checklist

Create controller node snapshot 5: Nova installation complete

Create compute node snapshot 2: Nova installation complete

7. Network service (Neutron) installation

7.1 Prepare the network environment (controller and compute nodes)

7.1.1 Put the NICs into promiscuous mode
  1. Put the external NIC into promiscuous mode

    [root@controller ~]# ifconfig ens33 promisc
    
    [root@compute ~]# ifconfig ens33 promisc
    
  2. Check the NIC information

    After promiscuous mode is enabled, “PROMISC” appears in the NIC information; the NIC then accepts all traffic passing through it, whether or not it is the intended recipient.

    [root@controller ~]# ip a
    2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:4c:30:55 brd ff:ff:ff:ff:ff:ff
        inet 192.168.182.136/24 brd 192.168.182.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::8b37:6ff4:79ac:9a35/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    
    [root@compute ~]# ip a
    2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:50:56:28:ba:e8 brd ff:ff:ff:ff:ff:ff
        inet 192.168.182.137/24 brd 192.168.182.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::8b37:6ff4:79ac:9a35/64 scope link tentative noprefixroute dadfailed 
           valid_lft forever preferred_lft forever
        inet6 fe80::2a6c:ca:3ac3:fa4/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    
  3. Make the setting apply automatically (see the note at the end of this step)

    # Append the command to the end of the profile so it runs automatically after boot
    [root@controller ~]# vim /etc/profile
    ifconfig ens33 promisc
    
    [root@compute ~]# vim /etc/profile
    ifconfig ens33 promisc
    
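    Note that /etc/profile only runs when a user logs in, so after a reboot the NIC stays out of promiscuous mode until the first login. A more robust sketch, assuming the classic rc.local mechanism of CentOS 7:

    [root@controller ~]# chmod +x /etc/rc.d/rc.local
    [root@controller ~]# echo 'ifconfig ens33 promisc' >> /etc/rc.d/rc.local
    # repeat the same two commands on the compute node
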
7.1.2 Load the bridge netfilter module
  1. Edit the configuration file

    [root@compute ~]# vim /etc/sysctl.conf 
    
    [root@controller ~]# vim /etc/sysctl.conf 
    
    # Append the following lines to the end of the file
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    
  2. Load the “br_netfilter” module (see the persistence note at the end of this list)

    [root@controller ~]# modprobe br_netfilter
    
    [root@compute ~]# modprobe br_netfilter
    
  3. Check that the module is loaded

    [root@controller ~]# sysctl -p
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    
    [root@compute ~]# sysctl -p
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    
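    The modprobe command does not survive a reboot. To load the module automatically at boot, a minimal sketch using systemd's modules-load.d mechanism (available on CentOS 7):

    [root@controller ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
    [root@compute ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
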

7.2 Installing and configuring the Neutron service on the controller node

7.2.1 Install the Neutron packages
  1. Install

    [root@controller ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge
    
  2. Check the user and group

    [root@controller ~]# cat /etc/passwd | grep neutron
    neutron:x:981:975:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
    
    [root@controller ~]# cat /etc/group | grep neutron
    neutron:x:975:
    
7.2.2 Create the Neutron database and grant privileges
  1. Log in to the database

    [root@controller ~]# mysql -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 115
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
  2. Create the neutron database

    MariaDB [(none)]> CREATE DATABASE neutron;
    Query OK, 1 row affected (0.001 sec)
    
  3. Grant privileges on the database

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.013 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.001 sec)
    
7.2.3 Edit the configuration files
  1. Configure the Neutron component

    # 1. Back up the configuration file
    [root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.bak
    
    # 2. Strip blank lines and comments to generate a clean file
    [root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.bak > /etc/neutron/neutron.conf
    
    [root@controller ~]# cat /etc/neutron/neutron.conf
    [DEFAULT]
    [cors]
    [database]
    [keystone_authtoken]
    [oslo_concurrency]
    [oslo_messaging_amqp]
    [oslo_messaging_kafka]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_middleware]
    [oslo_policy]
    [privsep]
    [ssl]
    
    # 3. Edit the configuration
    [root@controller ~]# vim /etc/neutron/neutron.conf
    
    [DEFAULT]
    core_plugin = ml2
    service_plugins =
    transport_url = rabbit://openstack:000000@controller:5672
    auth_strategy = keystone
    notify_nova_on_port_status_changes = true
    notify_nova_on_port_data_changes = true
    
    [database]
    connection = mysql+pymysql://neutron:000000@controller/neutron
    
    [keystone_authtoken]
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = project
    username = neutron
    password = 000000
    
    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    
    [nova]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = project
    username = nova
    password = 000000
    region_name = RegionOne
    server_proxyclient_address = 192.168.223.130
    
    # Additional settings for interaction with Nova (this [neutron] section is read by Nova rather than Neutron; the same settings are added to /etc/nova/nova.conf in step 6)
    [neutron]
    auth_url = http://192.168.223.130:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = project
    username = neutron
    password = 000000
    service_metadata_proxy = true
    metadata_proxy_shared_secret = METADATA_SECRET
    
  2. Edit the Layer 2 module (ML2) plugin configuration file

    # 1. Back up the configuration file
    [root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.bak
    
    # 2. Strip blank lines and comments to generate a clean file
    [root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
    
    [root@controller ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini 
    [DEFAULT]
    
    # 3. Edit the configuration file
    [root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini 
    [DEFAULT]
    [ml2]
    type_drivers = flat
    tenant_network_types = 
    mechanism_drivers = linuxbridge
    extension_drivers = port_security
    
    [ml2_type_flat]
    flat_networks = provider
    
    [securitygroup]
    enable_ipset = true
    
    # 4. Enable the ML2 plugin
    [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    # Only plugins under “/etc/neutron/” take effect, so “ml2_conf.ini” is linked as “plugin.ini” under “/etc/neutron/” to enable the ML2 plugin
    
  3. Edit the Linux bridge agent (linuxbridge_agent) configuration file

    # 1. Back up the configuration file
    [root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.bak
    
    # 2. Strip blank lines and comments to generate a clean file
    [root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    
    [root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [DEFAULT]
    
    # 3. Edit the configuration file
    [root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [DEFAULT]
    [linux_bridge]
    physical_interface_mappings = provider:ens33
    # “provider” here is the flat_networks value from the ML2 plugin; it maps to the external NIC.
    
    [vxlan]
    enable_vxlan = false
    
    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    
  4. Edit the DHCP agent (dhcp-agent) configuration file

    # 1. Back up the configuration file
    [root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.bak
    
    # 2. Strip comments and blank lines from the configuration file
    [root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.bak > /etc/neutron/dhcp_agent.ini
    
    [root@controller ~]# cat /etc/neutron/dhcp_agent.ini
    [DEFAULT]
    
    # 3. Edit the configuration file
    [root@controller ~]# vim /etc/neutron/dhcp_agent.ini
    [DEFAULT]
    interface_driver = linuxbridge
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    enable_isolated_metadata = true
    
  5. Edit the metadata agent (neutron-metadata-agent) configuration file

    # Configure the Nova metadata host and the metadata shared secret
    [root@controller ~]# vim /etc/neutron/metadata_agent.ini 
    [DEFAULT]
    nova_metadata_host = controller
    metadata_proxy_shared_secret = METADATA_SECRET
    
  6. Edit the Nova configuration file (see the note on METADATA_SECRET after this list)

    [root@controller ~]# vim /etc/nova/nova.conf 
    [neutron]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = project
    username = neutron
    password = 000000
    service_metadata_proxy = true
    metadata_proxy_shared_secret = METADATA_SECRET
    
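    Note: METADATA_SECRET is a placeholder shared secret. The same literal value must appear in metadata_agent.ini and in the [neutron] section of nova.conf, and in a real deployment it should be replaced on both sides with a random string; a minimal sketch for generating one (the output shown is a hypothetical example):

    [root@controller ~]# openssl rand -hex 10
    6f1c2e9b8d074a3e5c21        # example only; paste the generated value into both files
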
7.2.4 Synchronize the database
[root@controller ~]# su neutron -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"
# Verify that the synchronization succeeded
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
| arista_provisioned_nets                 |
| arista_provisioned_tenants              |
| arista_provisioned_vms                  |
| auto_allocated_topologies               |
| bgp_peers                               |
| bgp_speaker_dragent_bindings            |
| bgp_speaker_network_bindings            |
| bgp_speaker_peer_bindings               |
| bgp_speakers                            |
| brocadenetworks                         |
| brocadeports                            |
| cisco_csr_identifier_map                |
| cisco_hosting_devices                   |
| cisco_ml2_apic_contracts                |
| cisco_ml2_apic_host_links               |
7.2.5 Neutron component initialization
7.2.5.1 Create the Neutron user and assign a role
  1. Source the environment variables to authenticate as admin

    [root@controller ~]# . admin-login 
    
  2. Create the “neutron” user on the OpenStack platform

    [root@controller ~]# openstack user create --domain default --password 000000 neutron
    +---------------------+----------------------------------+
    | Field               | Value                            |
    +---------------------+----------------------------------+
    | domain_id           | default                          |
    | enabled             | True                             |
    | id                  | d263edcf60b4441c9d47354ec2384147 |
    | name                | neutron                          |
    | options             | {}                               |
    | password_expires_at | None                             |
    +---------------------+----------------------------------+
    # The username and password here must match the [keystone_authtoken] section of neutron.conf
    
  3. Assign the “admin” role to the “neutron” user

    [root@controller ~]# openstack role add --project project --user neutron admin
    
7.2.5.2 Create the Neutron service and endpoints
  1. Create the service

    [root@controller ~]# openstack service create --name neutron network
    +---------+----------------------------------+
    | Field   | Value                            |
    +---------+----------------------------------+
    | enabled | True                             |
    | id      | e5405504e48440469bbe448e3eb710d1 |
    | name    | neutron                          |
    | type    | network                          |
    +---------+----------------------------------+
    
  2. Create the service endpoints

    # 1. Create the endpoint for public access
    [root@controller ~]# openstack endpoint create --region RegionOne neutron public http://controller:9696
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | ebc4c55594aa47f1881180d56fc93f89 |
    | interface    | public                           |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | e5405504e48440469bbe448e3eb710d1 |
    | service_name | neutron                          |
    | service_type | network                          |
    | url          | http://controller:9696           |
    +--------------+----------------------------------+
    
    # 2. Create the endpoint for internal component access
    [root@controller ~]# openstack endpoint create --region RegionOne neutron internal http://controller:9696
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 15f15d559c254fdb929442fef81ecb85 |
    | interface    | internal                         |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | e5405504e48440469bbe448e3eb710d1 |
    | service_name | neutron                          |
    | service_type | network                          |
    | url          | http://controller:9696           |
    +--------------+----------------------------------+
    
    # 3. Create the endpoint for admin access
    [root@controller ~]# openstack endpoint create --region RegionOne neutron admin http://controller:9696
    +--------------+----------------------------------+
    | Field        | Value                            |
    +--------------+----------------------------------+
    | enabled      | True                             |
    | id           | 3473e4bd3190471e891bea2f156739f2 |
    | interface    | admin                            |
    | region       | RegionOne                        |
    | region_id    | RegionOne                        |
    | service_id   | e5405504e48440469bbe448e3eb710d1 |
    | service_name | neutron                          |
    | service_type | network                          |
    | url          | http://controller:9696           |
    +--------------+----------------------------------+
    
7.2.6 Start the Neutron services on the controller node
  1. Restart the Nova service

    [root@controller ~]# systemctl restart openstack-nova-api
    
  2. Start the Neutron services

    Start the Neutron server, the Linux bridge agent, the DHCP agent, and the metadata agent in turn

    [root@controller ~]# systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
    
    [root@controller ~]# systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
    
7.2.7 Verify the Neutron service on the controller node
  1. Check the listening ports

    [root@controller ~]# netstat -tnlup | grep 9696
    tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      40342/server.log    
    
  2. Test the service endpoint

    [root@controller ~]# curl http://controller:9696
    {"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://controller:9696/v2.0/", "rel": "self"}]}]}
    
  3. Check the service status

    [root@controller ~]# systemctl status neutron-server
    ● neutron-server.service - OpenStack Neutron Server
       Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2023-07-25 10:32:56 CST; 7min ago
     Main PID: 40342 (/usr/bin/python)
        Tasks: 6
    

7.3 Installing and configuring the Neutron service on the compute node

7.3.1 Install the Neutron package
  1. Install the package

    [root@compute ~]# yum install openstack-neutron-linuxbridge
    
  2. Check the user and group information

    [root@compute ~]# cat /etc/passwd | grep neutron
    neutron:x:986:980:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
    
    [root@compute ~]# cat /etc/group | grep neutron
    neutron:x:980:
    
7.3.2 Edit the Neutron configuration files
  1. Configure the Neutron component

    # 1. Back up the configuration file
    [root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.bak
    
    # 2. Strip comments and blank lines to generate a clean file
    [root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.bak > /etc/neutron/neutron.conf 
    
    [root@compute ~]# cat /etc/neutron/neutron.conf 
    [DEFAULT]
    [cors]
    [database]
    [keystone_authtoken]
    [oslo_concurrency]
    [oslo_messaging_amqp]
    [oslo_messaging_kafka]
    [oslo_messaging_notifications]
    [oslo_messaging_rabbit]
    [oslo_middleware]
    [oslo_policy]
    [privsep]
    [ssl]
    
    # 3. Edit the configuration
    [root@compute ~]# vim /etc/neutron/neutron.conf 
    [DEFAULT]
    transport_url = rabbit://openstack:000000@controller:5672
    auth_strategy = keystone
    
    [keystone_authtoken]
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = project
    username = neutron
    password = 000000
    
    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    
  2. Edit the Linux bridge agent (linuxbridge_agent) configuration file

    # 1. Back up the configuration file
    [root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.bak
    
    # 2. Strip blank lines and comments to generate a clean file
    [root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini 
    
    [root@compute ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini 
    [DEFAULT]
    
    # 3. Edit the configuration
    [root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini 
    [DEFAULT]
    [linux_bridge]
    physical_interface_mappings = provider:ens33   # maps to the external NIC
    
    [vxlan]
    enable_vxlan = false
    
    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    
  3. Edit the Nova configuration file

    [root@compute ~]# vim /etc/nova/nova.conf 
    [DEFAULT]
    vif_plugging_is_fatal = false
    vif_plugging_timeout = 0    # don't fail instance boot if Neutron port-binding events are slow or missing
    
    [neutron]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = project
    username = neutron
    password = 000000
    
7.3.3 Start the Neutron service on the compute node
  1. Restart the Nova service on the compute node

    [root@compute ~]# systemctl restart openstack-nova-compute
    
  2. Enable and start the Neutron Linux bridge agent

    [root@compute ~]# systemctl enable neutron-linuxbridge-agent
    
    [root@compute ~]# systemctl start neutron-linuxbridge-agent
    

7.4 Verify the Neutron service

  1. View the network agent list

    [root@controller ~]# openstack network agent list
    +-----------+--------------------+------------+-------------------+-------+-------+---------------------------+
    |   ID      | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
    +-----------+--------------------+------------+-------------------+-------+-------+---------------------------+
    |   ---     | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
    |   ---     | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
    |   ---     | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
    |   ---     | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
    +-----------+--------------------+------------+-------------------+-------+-------+---------------------------+
    # The Neutron agents are running normally when all four services show a smiley “:-)” in the Alive column and UP in the State column
    
  2. Check with the Neutron status tool

    [root@controller ~]# neutron-status upgrade check
    +---------------------------------------------------------------------+
    | Upgrade Check Results                                               |
    +---------------------------------------------------------------------+
    | Check: Gateway external network                                     |
    | Result: Success                                                     |
    | Details: L3 agents can use multiple networks as external gateways.  |
    +---------------------------------------------------------------------+
    | Check: External network bridge                                      |
    | Result: Success                                                     |
    | Details: L3 agents are using integration bridge to connect external |
    |   gateways                                                          |
    +---------------------------------------------------------------------+
    | Check: Worker counts configured                                     |
    | Result: Warning                                                     |
    | Details: The default number of workers has changed. Please see      |
    |   release notes for the new values, but it is strongly              |
    |   encouraged for deployers to manually set the values for           |
    |   api_workers and rpc_workers.                                      |
    +---------------------------------------------------------------------+
    # Neutron is healthy when the “Gateway external network” and “External network bridge” checks both report Success
    

7.5 Neutron installation checklist

Controller node snapshot 6: Neutron configuration complete

Compute node snapshot 3: Neutron configuration complete

8. Dashboard service installation

8.1 Installing and configuring the Dashboard service (compute node)

8.1.1 Install the package
[root@compute ~]# yum install openstack-dashboard
8.1.2 Configure the Dashboard service
  1. Open the configuration file

    [root@compute ~]# vi /etc/openstack-dashboard/local_settings
    
  2. Configure basic web server settings

    # 1. Allow access to the web service from any host (line 39)
    ALLOWED_HOSTS = ['*']
    
    # 2. Point the dashboard at the controller node (line 119)
    OPENSTACK_HOST = "controller"
    
    # 3. Set the current time zone to “Asia/Shanghai” (line 148)
    TIME_ZONE = "Asia/Shanghai"
    
  3. Configure the session cache (line 105)

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': 'controller:11211',
        }
    }
    
  4. Enable multi-domain support (new line)

    OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True  # allow multiple domains
    
  5. Pin the OpenStack component API versions (new lines)

    OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 3,
    }
    
  6. Set the default domain for users created through the Dashboard (new line)

    OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    
  7. Set the default role for users created through the Dashboard to “user” (new line)

    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
    
  8. Configure how the Neutron network is used (line 132)

    132 OPENSTACK_NEUTRON_NETWORK = {
    133     'enable_auto_allocated_network': False,
    134     'enable_distributed_router': False,
    135     'enable_fip_topology_check': False,			# changed
    136     'enable_ha_router': False,
    137     'enable_ipv6': True,						# changed
    138     # TODO(amotoki): Drop OPENSTACK_NEUTRON_NETWORK completely from here.
    139     # enable_quotas has the different default value here.
    140     'enable_quotas': True,						# changed
    141     'enable_rbac_policy': True,					# changed
    142     'enable_router': True,						# changed
    143 
    144     'default_dns_nameservers': [],
    145     'supported_provider_types': ['*'],
    146     'segmentation_id_range': {},
    147     'extra_provider_types': {},
    148     'supported_vnic_types': ['*'],
    149     'physical_networks': [],
    150 
    151 }
    

8.2 Publish the Dashboard service

8.2.1 Regenerate the Dashboard web application configuration file
  1. Enter the Dashboard site directory

    [root@compute ~]# cd /usr/share/openstack-dashboard
    
    [root@compute openstack-dashboard]# ll
    total 16
    -rwxr-xr-x  1 root root  831 May 17  2021 manage.py
    -rw-r--r--  2 root root  435 May 17  2021 manage.pyc
    -rw-r--r--  2 root root  435 May 17  2021 manage.pyo
    drwxr-xr-x 18 root root 4096 Jul 25 14:31 openstack_dashboard
    drwxr-xr-x 10 root root  114 Jul 25 14:31 static
    
  2. Generate the Dashboard web service configuration file

    [root@compute openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf 
    
    [root@compute openstack-dashboard]# cat /etc/httpd/conf.d/openstack-dashboard.conf 
    
    <VirtualHost *:80>
    
        ServerAdmin [email protected]
        ServerName  openstack_dashboard
    
        DocumentRoot /usr/share/openstack-dashboard/       # DocumentRoot is the site root directory; note it already points to the Dashboard site directory
    
        LogLevel warn
        ErrorLog /var/log/httpd/openstack_dashboard-error.log
        CustomLog /var/log/httpd/openstack_dashboard-access.log combined
    
        WSGIScriptReloading On
        WSGIDaemonProcess openstack_dashboard_website processes=3
        WSGIProcessGroup openstack_dashboard_website
        WSGIApplicationGroup %{GLOBAL}
        WSGIPassAuthorization On
    
        WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
    
        <Location "/">
            Require all granted
        </Location>
    
        Alias /static /usr/share/openstack-dashboard/static
        <Location "/static">
            SetHandler None
        </Location>
    </Virtualhost>
    
  3. Check the component API endpoints

    [root@compute ~]# curl http://controller:5000/v3
    {"version": {"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links
    
    [root@compute ~]# curl http://controller:9292
    {"versions": [{"status": "CURRENT", "id": "v2.9", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.7", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.6", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.5", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.4", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.3", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.2", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.1", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.0", "links": [{"href": "http://controller:9292/v2/", "rel": "self"}]}]}
    
8.2.2 Create symbolic links for the policy files

The directory “/etc/openstack-dashboard” ships with several built-in policy files; they are the default policies the Dashboard uses when interacting with the other components. List them as follows:

[root@compute ~]# ls /etc/openstack-dashboard/
cinder_policy.json  glance_policy.json  keystone_policy.json  local_settings  neutron_policy.json  nova_policy.d  nova_policy.json

For these policy files to take effect, they must be placed inside the Dashboard project. The following uses a symbolic link to bring them in:

[root@compute ~]# ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf

Check the Dashboard site directory:

[root@compute ~]# ll /usr/share/openstack-dashboard/openstack_dashboard
total 240
drwxr-xr-x  3 root root  4096 Jul 25 14:31 api
lrwxrwxrwx  1 root root    24 Jul 25 19:40 conf -> /etc/openstack-dashboard    # the symlink created in the previous step
-rw-r--r--  1 root root  4192 May 17  2021 context_processors.py
8.2.3 Start the Apache server
[root@compute ~]# systemctl enable httpd
[root@compute ~]# systemctl start httpd

8.3 Verify the Dashboard service

  1. Log in

    From a browser on the local machine, open the compute node's IP address http://192.168.223.131/ to reach the login page.

    Log in as the “admin” user of the “Default” domain with password “000000”.

    OpenStack搭建_第6张图片

  2. View images

    On the Overview page, choose Compute -> Images in the sidebar to open the Images page, where the previously uploaded “cirros” image is visible.

    OpenStack搭建_第7张图片

    OpenStack搭建_第8张图片

8.4 Dashboard installation checklist

Create controller node snapshot 7: Dashboard installation complete

Create compute node snapshot 4: Dashboard installation complete

9. Block storage service (Cinder) installation

9.1 Installing and configuring the Cinder service on the controller node

9.1.1 Install the Cinder package
  1. Install the package

    [root@controller ~]# yum -y install openstack-cinder
    
  2. Check the user information

    [root@controller ~]# cat /etc/passwd | grep cinder
    cinder:x:165:165:OpenStack Cinder Daemons:/var/lib/cinder:/sbin/nologin
    
    [root@controller ~]# cat /etc/group | grep cinder
    nobody:x:99:nova,cinder
    cinder:x:165:cinder
    
9.1.2 Create the Cinder database and grant privileges
  1. Log in to the database

    [root@controller ~]# mysql -uroot -p000000
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 34
    Server version: 10.3.20-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> 
    
  2. Create the “cinder” database

    MariaDB [(none)]> CREATE DATABASE cinder;
    Query OK, 1 row affected (0.000 sec)
    
  3. Grant privileges on the database

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.001 sec)
    
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';
    Query OK, 0 rows affected (0.000 sec)
    
9.1.3 Edit the Cinder configuration file
  1. Back up the configuration file

    [root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.bak 
    
  2. Strip comments and blank lines to generate a clean file

    [root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.bak > /etc/cinder/cinder.conf
    
  3. Edit the configuration

    [root@controller ~]# vim /etc/cinder/cinder.conf
    
    # 1. Connect to the “cinder” database
    [database]
    connection = mysql+pymysql://cinder:000000@controller/cinder
    
    # 2. Enable interaction with Keystone
    [DEFAULT]
    auth_strategy = keystone
    
    [keystone_authtoken]
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = project
    username = cinder
    password = 000000
    
    # 3. Configure the lock path
    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp
    
    # 4. Connect to the message queue (add this to the same [DEFAULT] section as above)
    [DEFAULT]
    transport_url = rabbit://openstack:000000@controller:5672
    
9.1.4 Edit the Nova configuration file
[root@controller ~]# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
9.1.5 Synchronize the database
[root@controller ~]# su cinder -s /bin/sh -c "cinder-manage db sync"
Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".
# Verify that the synchronization succeeded
MariaDB [(none)]> USE cinder;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [cinder]> SHOW TABLES;
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| attachment_specs           |
| backup_metadata            |
| backups                    |
| cgsnapshots                |
| clusters                   |
| consistencygroups          |
| driver_initiator_data      |
| encryption                 |
| group_snapshots            |
| group_type_projects        |
| group_type_specs           |
| group_types                |
| group_volume_type_mapping  |
| groups                     |
| image_volume_cache_entries |
| messages                   |
| migrate_version            |
| quality_of_service_specs   |
| quota_classes              |
| quota_usages               |
| quotas                     |
| reservations               |
| services                   |
| snapshot_metadata          |
| snapshots                  |
| transfers                  |

9.1.6 Cinder component initialization
9.1.6.1 Create the Cinder user and assign a role
  1. Source the environment variables to authenticate as admin

    [root@controller ~]# . admin-login 
    
  2. Create the “cinder” user on the OpenStack platform

    [root@controller ~]# openstack user create --domain default --password 000000 cinder
    +---------------------+----------------------------------+
    | Field               | Value                            |
    +---------------------+----------------------------------+
    | domain_id           | default                          |
    | enabled             | True                             |
    | id                  | 3c91d6707432479cb267e28ae711dea0 |
    | name                | cinder                           |
    | options             | {}                               |
    | password_expires_at | None                             |
    +---------------------+----------------------------------+
    
  3. Assign the “admin” role to the “cinder” user

    [root@controller ~]# openstack role add --project project --user cinder admin
    
9.1.6.2 Create the Cinder service and endpoints
  1. Create the service

    [root@controller ~]# openstack service create --name cinderv3 volumev3
    +---------+----------------------------------+
    | Field   | Value                            |
    +---------+----------------------------------+
    | enabled | True                             |
    | id      | 9d6daf3b156d4409a30085c126788f58 |
    | name    | cinderv3                         |
    | type    | volumev3                         |
    +---------+----------------------------------+
    
  2. Create the service endpoints

    # 1. Create the endpoint for public access
    [root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    +--------------+------------------------------------------+
    | Field        | Value                                    |
    +--------------+------------------------------------------+
    | enabled      | True                                     |
    | id           | 34a05c48307a444286ded10d2825dd4b         |
    | interface    | public                                   |
    | region       | RegionOne                                |
    | region_id    | RegionOne                                |
    | service_id   | 9d6daf3b156d4409a30085c126788f58         |
    | service_name | cinderv3                                 |
    | service_type | volumev3                                 |
    | url          | http://controller:8776/v3/%(project_id)s |
    +--------------+------------------------------------------+
    
    # 2. Create the endpoint for internal component access
    [root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    +--------------+------------------------------------------+
    | Field        | Value                                    |
    +--------------+------------------------------------------+
    | enabled      | True                                     |
    | id           | 36493c95564a4a8d9615ef8dd4ab6426         |
    | interface    | internal                                 |
    | region       | RegionOne                                |
    | region_id    | RegionOne                                |
    | service_id   | 9d6daf3b156d4409a30085c126788f58         |
    | service_name | cinderv3                                 |
    | service_type | volumev3                                 |
    | url          | http://controller:8776/v3/%(project_id)s |
    +--------------+------------------------------------------+
    
    # 3. Create the endpoint for admin access
    [root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    +--------------+------------------------------------------+
    | Field        | Value                                    |
    +--------------+------------------------------------------+
    | enabled      | True                                     |
    | id           | fb640f6a1b6747a9be5462d1a09e1f2f         |
    | interface    | admin                                    |
    | region       | RegionOne                                |
    | region_id    | RegionOne                                |
    | service_id   | 9d6daf3b156d4409a30085c126788f58         |
    | service_name | cinderv3                                 |
    | service_type | volumev3                                 |
    | url          | http://controller:8776/v3/%(project_id)s |
    +--------------+------------------------------------------+
    
9.1.7 Start the Cinder services on the controller node
  1. Restart the Nova service

    [root@controller ~]# systemctl restart openstack-nova-api
    
  2. Enable the “cinder-api” and “cinder-scheduler” modules to start at boot

    [root@controller ~]# systemctl enable openstack-cinder-api openstack-cinder-scheduler
    
  3. Start the Cinder services

    [root@controller ~]# systemctl start openstack-cinder-api openstack-cinder-scheduler
    
9.1.8 Verify the Cinder services on the controller node
  1. Check the listening ports

    [root@controller ~]# netstat -nutpl | grep 8776
    tcp        0      0 0.0.0.0:8776            0.0.0.0:*               LISTEN      5307/python2       
    
  2. View the volume service list

    [root@controller ~]# openstack volume service list
    +------------------+------------+------+---------+-------+----------------------------+
    | Binary           | Host       | Zone | Status  | State | Updated At                 |
    +------------------+------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller | nova | enabled | up    | 2023-07-26T14:30:00.000000 |
    +------------------+------------+------+---------+-------+----------------------------+
    

9.2 Building the storage node

9.2.1 Add a disk to the compute node

Add a new hard disk (create a new virtual disk): disk type SCSI (recommended), maximum size 100 GB, disk file name cinder.vmdk.

9.2.2 Create the volume group
  1. List all disks (block devices) and their mount points

    [root@compute ~]# lsblk
    NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda               8:0    0  200G  0 disk 
    ├─sda1            8:1    0    1G  0 part /boot
    └─sda2            8:2    0  198G  0 part 
      ├─centos-root 253:0    0  190G  0 lvm  /
      └─centos-swap 253:1    0    8G  0 lvm  [SWAP]
    sdb               8:16   0  100G  0 disk 			# the newly added disk, not yet partitioned or mounted
    sr0              11:0    1 1024M  0 rom  
    
  2. Create the LVM physical volume and volume group

    # 1. Initialize the disk as a physical volume
    [root@compute ~]# pvcreate /dev/sdb
      Physical volume "/dev/sdb" successfully created.
    
    # 2. Combine the physical volume into a volume group
    [root@compute ~]# vgcreate cinder-volumes /dev/sdb
      Volume group "cinder-volumes" successfully created.
    
    # 3. Configure which devices LVM scans (line 130; a verification sketch follows this list)
    [root@compute ~]# vim /etc/lvm/lvm.conf 
    devices {
    filter = [ "a/sda/", "a/sdb/", "r/.*/" ]     # accept “/dev/sda” and “/dev/sdb” and reject all other devices (“a” = accept, “r” = reject); “sda” must stay accepted because, per the lsblk output above, the operating system's root file system is on LVM on sda
    .....
    }
    
  3. Start the LVM metadata service

    [root@compute ~]# systemctl enable lvm2-lvmetad
    [root@compute ~]# systemctl start lvm2-lvmetad
    
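    A quick verification sketch before moving on: confirm that the physical volume and the volume group are visible.

    [root@compute ~]# pvs
    [root@compute ~]# vgs cinder-volumes
    # “cinder-volumes” should be listed with roughly 100G of free space
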
9.2.3 Install and configure the storage node
  1. Install the packages

    [root@compute ~]# yum install -y openstack-cinder targetcli python-keystone
    
  2. Edit the Cinder configuration file

    # 1. Back up the configuration file
    [root@compute ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf-bak
    
    # 2. Copy the configuration file from the controller node
    [root@compute ~]# scp root@controller:/etc/cinder/cinder.conf /etc/cinder/cinder.conf
    root@controller's password: 
    cinder.conf                                            100%  880   840.8KB/s   00:00   
    
    # 3. Add the following configuration
    [root@compute ~]# vim /etc/cinder/cinder.conf
    [DEFAULT]
    auth_strategy = keystone
    transport_url = rabbit://openstack:000000@controller:5672
    glance_api_servers = http://controller:9292
    enabled_backends = lvm
    
    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    target_protocol = iscsi
    target_helper = lioadm
    
9.2.4 Start the Cinder volume service on the compute node
[root@compute ~]# systemctl enable openstack-cinder-volume target
[root@compute ~]# systemctl start openstack-cinder-volume target

9.3 Verify the Cinder service

  1. View the volume service list

    [root@controller ~]# . admin-login 
    [root@controller ~]# openstack volume service list
    +------------------+-------------+------+---------+-------+----------------------------+
    | Binary           | Host        | Zone | Status  | State | Updated At                 |
    +------------------+-------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller  | nova | enabled | up    | 2023-07-27T04:56:18.000000 |
    | cinder-volume    | compute@lvm | nova | enabled | up    | 2023-07-27T04:56:14.000000 |
    +------------------+-------------+------+---------+-------+----------------------------+
    # Both modules show a State of up on the controller node
    
  2. Check the volumes through the Dashboard

    After logging in to OpenStack, if the Cinder service is healthy, a Volumes entry appears in the left navigation bar, and the Overview page shows three pie charts: Volumes, Volume Snapshots, and Volume Storage.

    OpenStack搭建_第9张图片

9.4 Creating volumes with Cinder

9.4.1 Create a volume from the command line
 # On the controller node, create an 8 GB volume named “volume1”
 [root@controller ~]# openstack volume create --size 8 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2023-07-27T05:26:22.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | ac84e69f-23e4-4856-ad64-c9cd0bbe6498 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 8                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | __DEFAULT__                          |
| updated_at          | None                                 |
| user_id             | 157c2fe27dc54c8baa467a035274ec00     |
+---------------------+--------------------------------------+
# View the volume list
[root@controller ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID                                   | Name    | Status    | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| ac84e69f-23e4-4856-ad64-c9cd0bbe6498 | volume1 | available |    8 |             |
+--------------------------------------+---------+-----------+------+-------------+
9.4.2 Create a volume with the Dashboard

OpenStack搭建_第10张图片

OpenStack搭建_第11张图片

OpenStack搭建_第12张图片

9.5 Cinder installation checklist

Create controller node snapshot 8: Cinder installation complete

Create compute node snapshot 5: Cinder installation complete

10. Virtual network management

A cloud instance attaches to a port on a virtual network and cannot exist apart from the network, so the virtual network that will carry an instance must be created before the instance itself.

10.1 Network management

An OpenStack network is an OSI Layer 2 network built from virtual devices. Manage OpenStack networks with the following command:

openstack network <operation> [options] [<network>]

| Operation | Description |
| --- | --- |
| create | Create a network |
| delete | Delete a network |
| list | List existing networks |
| set | Set network parameters |
| unset | Unset network parameters |
| show | Show detailed network information |

| Option | Description |
| --- | --- |
| -h | Show help |
| --enable | Enable the network |
| --disable | Disable the network |
| --enable-port-security | Enable port security |
| --disable-port-security | Disable port security |
| --share | Make the network shared |
| --no-share | Make the network non-shared |
| --external | Mark the network as external |
| --internal | Mark the network as internal |
| --provider-network-type | Network type (Flat, GRE, Local, VLAN, VXLAN) |
| --provider-physical-network | Name of the physical network that backs the virtual network |

# Example 1: create a shared external “Flat” network named “vm-network”
[root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat vm-network

# View the current network list
[root@controller ~]# openstack network list
+--------------------------------------+------------+---------+
| ID                                   | Name       | Subnets |
+--------------------------------------+------------+---------+
| 3bfd3c40-f44c-42cd-81c8-9aa5cbd19bde | vm-network |         |
+--------------------------------------+------------+---------+
  
# View the detailed information of a network
  openstack network show <Network ID or Name>
[root@controller ~]# openstack network show 3bfd3c40-f44c-42cd-81c8-9aa5cbd19bde
# Example 2: rename the network “vm-network” to “vm-net” and change it to non-shared
[root@controller ~]# openstack network set --name vm-net --no-share vm-network
[root@controller ~]# openstack network list
+--------------------------------------+--------+---------+
| ID                                   | Name   | Subnets |
+--------------------------------------+--------+---------+
| 3bfd3c40-f44c-42cd-81c8-9aa5cbd19bde | vm-net |         |
+--------------------------------------+--------+---------+
# Example 3: delete the network “vm-net” (a network that still contains ports cannot be deleted directly; delete the ports first, then the network)
[root@controller ~]# openstack network delete vm-net
[root@controller ~]# openstack network list

10.2 Subnet management

A subnet is an IP address range attached to a network; its main job is to allocate IP addresses to new ports created in the network. Subnets and networks have a many-to-one relationship: a subnet must belong to exactly one network, while one network can contain multiple subnets.

openstack subnet <operation> [options] <subnet>

| Operation | Description |
| --- | --- |
| create | Create a subnet |
| delete | Delete a subnet |
| list | List existing subnets |
| set | Set subnet parameters |
| unset | Unset subnet parameters |
| show | Show detailed subnet information |

| Option | Description |
| --- | --- |
| -h | Show help |
| --project | Project the subnet belongs to |
| --subnet-range | IP range of the subnet |
| --dhcp | Enable DHCP to assign instance IP addresses automatically |
| --no-dhcp | Disable DHCP |
| --allocation-pool start=<IP>,end=<IP> | IP address pool for DHCP (“start” is the first address, “end” the last) |
| --gateway | Set the gateway |
| --dns-nameserver | Set the DNS server address |
| --network | Network the subnet belongs to |

# Example 1: in the “vm-network” network create a subnet named “vm-subnetwork” with the 192.168.20.0/24 range, automatically assigning instances addresses between 192.168.20.100 and 192.168.20.200, with DNS server 114.114.114.114
[root@controller ~]# openstack subnet create --network vm-network --dhcp --allocation-pool start=192.168.20.100,end=192.168.20.200 --dns-nameserver 114.114.114.114 --subnet-range 192.168.20.0/24 vm-subnetwork

[root@controller ~]# openstack subnet list
+--------------------------------------+---------------+--------------------------------------+-----------------+
| ID                                   | Name          | Network                              | Subnet          |
+--------------------------------------+---------------+--------------------------------------+-----------------+
| 8526ec5b-7a21-4e8e-9574-fc551285ba1b | vm-subnetwork | 4aa3fc20-9498-4d67-8428-1c772b3d5812 | 192.168.20.0/24 |
+--------------------------------------+---------------+--------------------------------------+-----------------+

[root@controller ~]# openstack subnet show vm-subnetwork
....
# Example 2: rename the subnet to “vm-subnet” and set the gateway to 192.168.20.2
[root@controller ~]# openstack subnet set --name vm-subnet --gateway 192.168.20.2 vm-subnetwork
[root@controller ~]# openstack subnet list
+--------------------------------------+-----------+--------------------------------------+-----------------+
| ID                                   | Name      | Network                              | Subnet          |
+--------------------------------------+-----------+--------------------------------------+-----------------+
| 8526ec5b-7a21-4e8e-9574-fc551285ba1b | vm-subnet | 4aa3fc20-9498-4d67-8428-1c772b3d5812 | 192.168.20.0/24 |
+--------------------------------------+-----------+--------------------------------------+-----------------+
# Example 3: delete the subnet “vm-subnet” (a subnet that still contains ports cannot be deleted directly; delete the ports first, then the subnet)
[root@controller ~]# openstack subnet delete vm-subnet
[root@controller ~]# openstack subnet list

10.3 Port management

A port is an interface attached to a subnet that connects an instance's virtual NIC. A port defines a hardware address (MAC) and an independent IP address; when an instance's virtual NIC connects to a port, the port assigns its MAC and IP addresses to that NIC.

Subnets and ports have a one-to-many relationship: a port must belong to one subnet, while a subnet can contain many ports.

openstack port <operation> [options] <port>

| Operation | Description |
| --- | --- |
| create | Create a port |
| delete | Delete a port |
| list | List existing ports |
| set | Set port parameters |
| unset | Unset port parameters |
| show | Show detailed port information |

| Option | Description |
| --- | --- |
| -h | Show help |
| --network | Network the port belongs to |
| --fixed-ip subnet=<subnet>,ip-address=<IP> | Bind an IP address to the port (“subnet” is the subnet, “ip-address” the address to bind) |
| --enable | Enable the port |
| --disable | Disable the port |
| --enable-port-security | Enable port security |
| --disable-port-security | Disable port security |

# Example 1: in the “vm-subnet” subnet of the “vm-network” network, create a port named “myport” bound to the IP address 192.168.20.120
# Note: the bound IP must belong to an existing subnet; if no subnet exists, create one first
[root@controller ~]# openstack port create --network vm-network --fixed-ip subnet=vm-subnet,ip-address=192.168.20.120 myport
[root@controller ~]# openstack port list
.....
# Example 2: delete the “myport” port
[root@controller ~]# openstack port delete myport
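
A port is normally consumed when an instance boots; a hedged sketch (assuming an image named “cirros” and a flavor named “m1.tiny” already exist, and substituting the real port ID):

[root@controller ~]# openstack server create --image cirros --flavor m1.tiny --nic port-id=<port ID> vm1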

10.4 Virtual bridge management

A bridge is a Layer 2 device in the OSI reference model, similar to a switch, that carries traffic between the cloud instances attached to it.

brctl <operation>

| Operation | Description |
| --- | --- |
| addbr <bridge> | Add a bridge |
| delbr <bridge> | Delete a bridge |
| addif <bridge> <interface> | Attach a NIC to a bridge |
| delif <bridge> <interface> | Detach a NIC from a bridge |
| show [<bridge>] | Show bridge information |

# Example 1: create a bridge “br1”
[root@controller ~]# brctl addbr br1
# Example 2: attach the (external) NIC ens33 to the bridge
[root@controller ~]# brctl addif br1 ens33
# Example 3: show the bridge information
[root@controller ~]# brctl show br1
bridge name	  bridge id		      STP enabled	 interfaces
br1		      8000.000c296b54ee	  no		     ens33
# Instances and physical machines can communicate directly only when the physical NIC ens33 and the instance's network interface are attached to the same bridge

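Manually attaching ens33 to a bridge can interrupt the node's external connectivity, so the test bridge should be removed afterwards; a minimal cleanup sketch:

[root@controller ~]# brctl delif br1 ens33
[root@controller ~]# ip link set br1 down
[root@controller ~]# brctl delbr br1
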
10.5 Virtual network management - project implementation

10.5.1 Prepare the environment
  1. Remove the system network management package

    The “NetworkManager” package that ships with CentOS conflicts with the virtual gateway services OpenStack uses, so remove it first (see the note on the classic network service after this list).

    [root@controller ~]# yum -y remove NetworkManager
    
    [root@compute ~]# yum -y remove NetworkManager
    
  2. Disable the DHCP service of the VMware virtual network

    Neutron provides its own DHCP service, and its DHCP server sits on the same network segment as the one provided by VMware Workstation. Two DHCP servers on one segment would prevent instances from obtaining the IP addresses Neutron assigns.

    (Screenshot omitted: disabling DHCP for the virtual network in the VMware Virtual Network Editor)

  3. Install the bridge management tool package

    [root@controller ~]# yum install -y bridge-utils
    
10.5.2 Creating and Managing Virtual Networks and Subnets with the Dashboard
  1. Log in to the Dashboard

    http://192.168.223.131/ (the Dashboard address)

    Domain "Default", user "admin"

  2. Create the virtual network

    [Admin] -> [Network] -> [Networks] -> [Create Network]

    (Screenshots omitted: the Create Network dialog and the resulting network and subnet details)

10.5.3 Creating and Managing Virtual Networks and Subnets from the Command Line
  1. View the virtual networks and subnets

    # 1. Log in by sourcing the admin credentials file
    [root@controller ~]# . admin-login
    
    # 2. List the existing virtual networks
    [root@controller ~]# openstack network list
    +--------------------------------------+------+--------------------------------------+
    | ID                                   | Name | Subnets                              |
    +--------------------------------------+------+--------------------------------------+
    | f4bab2de-3d5d-43d6-9933-e96569a798fa | net1 | 742f01cd-7cd2-4fda-a8e8-ae4094d612b5 |
    +--------------------------------------+------+--------------------------------------+
    
    # 3. List the existing subnets
    [root@controller ~]# openstack subnet list
    +--------------------------------------+---------+--------------------------------------+------------------+
    | ID                                   | Name    | Network                              | Subnet           |
    +--------------------------------------+---------+--------------------------------------+------------------+
    | 742f01cd-7cd2-4fda-a8e8-ae4094d612b5 | subnet1 | f4bab2de-3d5d-43d6-9933-e96569a798fa | 192.168.182.0/24 |
    +--------------------------------------+---------+--------------------------------------+------------------+
    
    # 4. List the existing network ports
    [root@controller ~]# openstack port list
    
    
  2. Delete the existing virtual network

    Since a Flat virtual network was already created with the Dashboard, and a Flat network needs a physical NIC all to itself, a second Flat network cannot be created. The existing virtual network, its subnet, and its ports must be deleted first, as sketched below.

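    A minimal deletion sequence, assuming the Dashboard-created resources are named net1 and subnet1 as in the listings above (the port ID placeholder is illustrative; order matters - ports first, then the subnet, then the network):

    [root@controller ~]# openstack port delete <port-ID>
    [root@controller ~]# openstack subnet delete subnet1
    [root@controller ~]# openstack network delete net1
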
  3. Create the virtual network and subnet

    # 1. Create the virtual network
    [root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat vm-network
    
    [root@controller ~]# openstack network list
    +--------------------------------------+------------+---------+
    | ID                                   | Name       | Subnets |
    +--------------------------------------+------------+---------+
    | e14e80ec-830f-4cf5-a165-c043e8578e7a | vm-network |         |
    +--------------------------------------+------------+---------+
    
    # 2. Create the subnet (a Flat network's subnet must be on the same segment as the external network)
    [root@controller ~]# openstack subnet create --network vm-network --allocation-pool start=192.168.182.140,end=192.168.182.240 --dns-nameserver 114.114.114.114 --gateway 192.168.182.2 --subnet-range 192.168.182.0/24 vm-subnetwork
    [root@controller ~]# openstack subnet list
    +--------------------------------------+---------------+--------------------------------------+------------------+
    | ID                                   | Name          | Network                              | Subnet           |
    +--------------------------------------+---------------+--------------------------------------+------------------+
    | d1197a98-a8b5-4322-beb6-0328dfeb5e98 | vm-subnetwork | e14e80ec-830f-4cf5-a165-c043e8578e7a | 192.168.182.0/24 |
    +--------------------------------------+---------------+--------------------------------------+------------------+
    
    
    # 3. Restart the network service
    [root@controller ~]# systemctl restart network
    
    [root@controller ~]# openstack network list
    +--------------------------------------+------------+--------------------------------------+
    | ID                                   | Name       | Subnets                              |
    +--------------------------------------+------------+--------------------------------------+
    | e14e80ec-830f-4cf5-a165-c043e8578e7a | vm-network | d1197a98-a8b5-4322-beb6-0328dfeb5e98 |
    +--------------------------------------+------------+--------------------------------------+
    
  4. Bridge management

    # 1. Check the network interfaces
    [root@controller ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br1 state UP group default qlen 1000
        link/ether 00:0c:29:6b:54:ee brd ff:ff:ff:ff:ff:ff
        inet 192.168.182.136/24 brd 192.168.182.255 scope global noprefixroute ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::501a:f7f4:5fc0:dac0/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe6b:54ee/64 scope link 
           valid_lft forever preferred_lft forever
    3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:6b:54:f8 brd ff:ff:ff:ff:ff:ff
        inet 192.168.223.130/24 brd 192.168.223.255 scope global ens34
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe6b:54f8/64 scope link 
           valid_lft forever preferred_lft forever
    4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:8e:42:96 brd ff:ff:ff:ff:ff:ff
    5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
        link/ether 52:54:00:8e:42:96 brd ff:ff:ff:ff:ff:ff
    6: br1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 00:0c:29:6b:54:ee brd ff:ff:ff:ff:ff:ff
    
    # 2. Check the bridges
    [root@controller ~]# brctl show
    bridge name	   bridge id			STP enabled		interfaces
    br1			  8000.000c296b54ee		no			    ens33
    virbr0		  8000.5254008e4296		yes		         virbr0-nic
    # [Note] The bridge on the compute node appears only after a cloud host has been created
    

11. Instance Type Management

11.1 Basic Concepts

A cloud host is also called an instance, and an instance type (flavor) is essentially a virtual hardware configuration template for cloud hosts. The template defines the memory size, disk size, number of CPUs, and other properties, and the OpenStack platform uses it to mass-produce cloud hosts.

[Note] OpenStack releases up to Mitaka shipped with default instance types; from Newton onward there are none, and the system administrator must define them.

| Instance type | Virtual CPUs | Disk / GB | RAM / MB |
| --- | --- | --- | --- |
| m1.tiny | 1 | 1 | 512 |
| m1.small | 1 | 20 | 2048 |
| m1.medium | 2 | 40 | 4096 |
| m1.large | 4 | 80 | 8192 |

11.2 Managing Instance Types

Instance types can be managed only by users with the admin role.

openstack flavor <operation> [options] <flavor-name>

| Operation | Description |
| --- | --- |
| create | Create a new instance type |
| delete | Delete an instance type |
| list | List the existing instance types |
| show | Show detailed information about an instance type |

| Option | Description |
| --- | --- |
| -h | Show help |
| --id | Set the flavor ID (defaults to auto) |
| --ram | Memory size, in MB |
| --disk | Disk size, in GB |
| --swap | Swap size, in MB |
| --vcpus | Number of virtual CPUs (default 1) |
| --public | Public (the default): the instance type may be used by other projects |
| --private | Private, the opposite of public: the instance type may not be used by other projects |
# Case 1: Create a public instance type named "m1.tiny"
[root@controller ~]# openstack flavor create --vcpus 1 --ram 512 --disk 1 --public m1.tiny
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 1                                    |
| id                         | 45ad1813-964d-4c8e-b1e1-6989e68ad445 |
| name                       | m1.tiny                              |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 512                                  |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+
# Case 2: List the existing instance types
[root@controller ~]# openstack flavor list
+--------------------------------------+---------+-----+------+-----------+-------+-----------+
| ID                                   | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------+-----+------+-----------+-------+-----------+
| 45ad1813-964d-4c8e-b1e1-6989e68ad445 | m1.tiny | 512 |    1 |         0 |     1 | True      |
+--------------------------------------+---------+-----+------+-----------+-------+-----------+
# Case 3: Delete the instance type "m1.tiny"
[root@controller ~]# openstack flavor delete m1.tiny

11.3 Creating and Managing Instance Types with the Dashboard

  1. Log in to the Dashboard

    http://192.168.223.131/ (the Dashboard address)

    Domain "Default", user "admin"

  2. [Admin] -> [Compute] -> [Flavors] -> [Create Flavor]

    (Screenshots omitted: the Create Flavor dialog and the resulting flavor list)

    In practice, the system administrator can predefine a variety of instance types to meet users' needs for different kinds of cloud hosts.

  3. Delete an instance type

    (Screenshots omitted: deleting the flavor from the flavor list)

11.4 Creating and Managing Instance Types from the Command Line

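The commands from 11.2 apply here unchanged. As a brief sketch, the "m1.small" type from the table in 11.1 could be created and verified as follows (values taken from that table):

[root@controller ~]# openstack flavor create --vcpus 1 --ram 2048 --disk 20 --public m1.small
[root@controller ~]# openstack flavor show m1.small
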
12. Cloud Host Management

12.1 Cloud Host and Snapshot Management

12.1.1 Cloud Host Management
openstack server <operation> <server-name> [options]
| Operation | Description |
| --- | --- |
| create | Create a cloud host |
| delete | Delete a cloud host |
| start | Start a cloud host |
| stop | Shut down a cloud host |
| lock | Lock a cloud host |
| unlock | Unlock a cloud host |
| pause | Pause a cloud host (saves the current state to memory) |
| unpause | Unpause a cloud host |
| reboot | Reboot a cloud host |
| rebuild | Rebuild a cloud host |
| rescue | Rescue a cloud host |
| unrescue | Unrescue a cloud host |
| resize | Resize a cloud host (change its flavor) |
| restore | Restore a cloud host |
| suspend | Suspend a cloud host (saves the current state to disk) |
| resume | Resume a suspended cloud host |
| show | Show detailed information about a cloud host |

| Option | Description |
| --- | --- |
| -h | Show help |
| --image | Image used to create the cloud host |
| --flavor | Instance type used to create the cloud host |
| --volume | Volume used to create the cloud host |
| --snapshot | Snapshot used to create the cloud host |
| --security-group | Security group applied to the cloud host |
| --host | Create the cloud host on a specific server |
| --network | Network the cloud host connects to |
| --port | Port the cloud host connects to |
| --nic | Network properties of the cloud host: "net-id" is the network to connect to; "v4-fixed-ip" is the IPv4 address to bind; "v6-fixed-ip" is the IPv6 address to bind; "port-id" is the port to connect to; "auto" connects automatically; "none" attaches no network |
| --key-name | Key pair to inject into the cloud host |
# Case 1: Create a cloud host named "VM_host" from the "cirros" image and the "m1.tiny" instance type
[root@controller ~]# openstack server create VM_host --image cirros --flavor m1.tiny --network vm-network
+-------------------------------------+------------------------------------------------+
| Field                               | Value                                          |
+-------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                         |
| OS-EXT-AZ:availability_zone         |                                                |
| OS-EXT-SRV-ATTR:host                | None                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                           |
| OS-EXT-SRV-ATTR:instance_name       |                                                |
| OS-EXT-STS:power_state              | NOSTATE                                        |
| OS-EXT-STS:task_state               | scheduling                                     |
| OS-EXT-STS:vm_state                 | building                                       |
| OS-SRV-USG:launched_at              | None                                           |
| OS-SRV-USG:terminated_at            | None                                           |
| accessIPv4                          |                                                |
| accessIPv6                          |                                                |
| addresses                           |                                                |
| adminPass                           | hEUbZNb6axzT                                   |
| config_drive                        |                                                |
| created                             | 2023-07-28T03:50:16Z                           |
| flavor                              | m1.tiny (45ad1813-964d-4c8e-b1e1-6989e68ad445) |
| hostId                              |                                                |
| id                                  | a83cf282-c439-440a-9219-ca2697a21334           |
| image                               | cirros (08a58e13-dab2-4378-87c4-24dc6bd99b75)  |
| key_name                            | None                                           |
| name                                | VM_host                                        |
| progress                            | 0                                              |
| project_id                          | 75697606e21045f188036410b6e5ac90               |
| properties                          |                                                |
| security_groups                     | name='default'                                 |
| status                              | BUILD                                          |
| updated                             | 2023-07-28T03:50:16Z                           |
| user_id                             | 157c2fe27dc54c8baa467a035274ec00               |
| volumes_attached                    |                                                |
+-------------------------------------+------------------------------------------------+
# Case 2: List the existing cloud hosts
[root@controller ~]# openstack server list
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| ID                                   | Name    | Status | Networks                   | Image  | Flavor  |
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| a83cf282-c439-440a-9219-ca2697a21334 | VM_host | ACTIVE | vm-network=192.168.182.152 | cirros | m1.tiny |
+--------------------------------------+---------+--------+----------------------------+--------+---------+
# Case 3: Reboot a cloud host
There are two ways to reboot a cloud host on the OpenStack platform:
	Soft reboot: shut the OS down normally and restart   [root@controller ~]# openstack server reboot VM_host
 	Hard reboot: power-cycle the cloud host              [root@controller ~]# openstack server reboot VM_host --hard
# Case 4: Pause and suspend a cloud host
	Pausing saves the cloud host's current state to memory and stops it; unpausing restores the pre-pause state and re-enables the host.
	Suspending saves the cloud host's current state to disk and stops it; resuming restores the pre-suspend state and re-enables the host.

## 4.1 Pause the cloud host
[root@controller ~]# openstack server pause VM_host
[root@controller ~]# openstack server list
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| ID                                   | Name    | Status | Networks                   | Image  | Flavor  |
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| a83cf282-c439-440a-9219-ca2697a21334 | VM_host | PAUSED | vm-network=192.168.182.152 | cirros | m1.tiny |
+--------------------------------------+---------+--------+----------------------------+--------+---------+

## 4.2 Unpause the cloud host
[root@controller ~]# openstack server unpause VM_host
[root@controller ~]# openstack server list
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| ID                                   | Name    | Status | Networks                   | Image  | Flavor  |
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| a83cf282-c439-440a-9219-ca2697a21334 | VM_host | ACTIVE | vm-network=192.168.182.152 | cirros | m1.tiny |
+--------------------------------------+---------+--------+----------------------------+--------+---------+

## 4.3 Suspend the cloud host
[root@controller ~]# openstack server suspend VM_host
[root@controller ~]# openstack server list
+--------------------------------------+---------+-----------+----------------------------+--------+---------+
| ID                                   | Name    | Status    | Networks                   | Image  | Flavor  |
+--------------------------------------+---------+-----------+----------------------------+--------+---------+
| a83cf282-c439-440a-9219-ca2697a21334 | VM_host | SUSPENDED | vm-network=192.168.182.152 | cirros | m1.tiny |
+--------------------------------------+---------+-----------+----------------------------+--------+---------+

## 4.4 Resume the cloud host
[root@controller ~]# openstack server resume VM_host
[root@controller ~]# openstack server list
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| ID                                   | Name    | Status | Networks                   | Image  | Flavor  |
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| a83cf282-c439-440a-9219-ca2697a21334 | VM_host | ACTIVE | vm-network=192.168.182.152 | cirros | m1.tiny |
+--------------------------------------+---------+--------+----------------------------+--------+---------+
# Case 5: Stop and start a cloud host
[root@controller ~]# openstack server stop VM_host
[root@controller ~]# openstack server list
+--------------------------------------+---------+---------+----------------------------+--------+---------+
| ID                                   | Name    | Status  | Networks                   | Image  | Flavor  |
+--------------------------------------+---------+---------+----------------------------+--------+---------+
| a83cf282-c439-440a-9219-ca2697a21334 | VM_host | SHUTOFF | vm-network=192.168.182.152 | cirros | m1.tiny |
+--------------------------------------+---------+---------+----------------------------+--------+---------+

[root@controller ~]# openstack server start VM_host
[root@controller ~]# openstack server list
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| ID                                   | Name    | Status | Networks                   | Image  | Flavor  |
+--------------------------------------+---------+--------+----------------------------+--------+---------+
| a83cf282-c439-440a-9219-ca2697a21334 | VM_host | ACTIVE | vm-network=192.168.182.152 | cirros | m1.tiny |
+--------------------------------------+---------+--------+----------------------------+--------+---------+
# Case 6: Rebuild a cloud host
If an existing cloud host has failed, it can be restored with a rebuild (similar to a factory reset)
[root@controller ~]# openstack server rebuild VM_host
+-------------------+----------------------------------------------------------+
| Field             | Value                                                    |
+-------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL                                                   |
| accessIPv4        |                                                          |
| accessIPv6        |                                                          |
| addresses         | vm-network=192.168.182.152                               |
| adminPass         | zAffeSGB9Q2f                                             |
| created           | 2023-07-28T03:50:16Z                                     |
| flavor            | m1.tiny (45ad1813-964d-4c8e-b1e1-6989e68ad445)           |
| hostId            | 1b91c4ec8f561bad59f7fdaad4b108d604b68d6ddba5bb75fb4fa10d |
| id                | a83cf282-c439-440a-9219-ca2697a21334                     |
| image             | cirros (08a58e13-dab2-4378-87c4-24dc6bd99b75)            |
| name              | VM_host                                                  |
| progress          | 0                                                        |
| project_id        | 75697606e21045f188036410b6e5ac90                         |
| properties        |                                                          |
| status            | REBUILD                                                  |
| updated           | 2023-07-28T09:01:53Z                                     |
| user_id           | 157c2fe27dc54c8baa467a035274ec00                         |
+-------------------+----------------------------------------------------------+
# Case 7: Delete a cloud host
[root@controller ~]# openstack server delete VM_host
12.1.2 Snapshot Management

Taking a snapshot produces an image, which can be used to restore the cloud host or to create new ones.

# Syntax: openstack server image create <server> [options]
# Take a snapshot of the cloud host "VM_host", producing the image "vmSnapshot"
[root@controller ~]# openstack server image create VM_host --name vmSnapshot

# The resulting image can be managed through Glance
[root@controller ~]# openstack image list
+--------------------------------------+------------+--------+
| ID                                   | Name       | Status |
+--------------------------------------+------------+--------+
| 08a58e13-dab2-4378-87c4-24dc6bd99b75 | cirros     | active |
| 8a9ac056-4bc0-41bf-8657-3d1735ec5120 | vmSnapshot | active |
+--------------------------------------+------------+--------+
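
# A sketch of booting a new cloud host from the snapshot image (the flavor and
# network names reuse the earlier cases):
[root@controller ~]# openstack server create VM_host2 --image vmSnapshot --flavor m1.tiny --network vm-network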

12.2 The Cloud Host Console

12.2.1 The Dashboard Cloud Host Console
  1. Open the console and log in to the cloud host

    After logging in to the Dashboard, open the [Instances] page: in the side navigation choose [Project] -> [Compute] -> [Instances]

    (Screenshot omitted: the Instances page)

    **Select the instance to log in to:** click the [Instance Name] of the cloud host to manage, which opens the [Instance Overview] page

    (Screenshot omitted: the Instance Overview page)

    **Open the console:** on the [Instance Overview] page, select the [Console] tab to enter the instance console; for convenience you can also click the [Click here to show only console] link

    (Screenshots omitted: the instance console)

    Log in to the cloud host: once it has finished booting, log in with the CirrOS user "cirros" and the password "gocubsgo"

    (Screenshots omitted: logging in at the console)

  2. The virsh cloud host management tool

    virsh is a management tool provided by the Libvirt package; it offers a range of functions for managing cloud hosts.

    # Case 1: List the running cloud hosts
    # (virsh talks to the local hypervisor, so run it on the compute node that hosts the instances)
    [root@compute ~]# virsh list

    # Case 2: Connect to the console of the cloud host with ID "1"
    [root@compute ~]# virsh console 1


12.3 Creating and Managing Cloud Hosts with the Dashboard

12.4 Creating and Managing Cloud Hosts from the Command Line

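The cases in 12.1.1 already cover the command-line workflow. As one further sketch, a cloud host can be attached to a pre-created port with a fixed IP address, reusing the port example from 10.3 (all names come from those cases):

[root@controller ~]# openstack port create --network vm-network --fixed-ip subnet=vm-subnet,ip-address=192.168.20.120 myport
[root@controller ~]# openstack server create demo-host --image cirros --flavor m1.tiny --port myport
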
13. Deploying Cloud Hosts from a Cloud Image

13.1 Key Pairs

There are two ways to log in to a Linux cloud host: with a username and password, or with a key pair. Cloud hosts created from the official CentOS cloud image are logged in to with a key pair by default.

A key pair is a pair of keys produced by the SSH encryption algorithm: a public key and a private key. The private key is held, and kept secret, by the owner of the key pair, while the public key is public. The public key encrypts data and the private key decrypts it; data encrypted with the public key can only be decrypted with the matching private key.

To log in to a Linux instance with an SSH key, a key pair must be created first and its public key specified when the cloud host is deployed (or bound to the host afterwards); the user can then log in with the private key, without a password.

Compared with username/password login, key pairs have the following advantages:

  	1. SSH key authentication is more secure and reliable.
  	2. Keys are strong enough to essentially rule out brute-force attacks.
  	3. The private key cannot be derived from the public key.
  	4. Convenience: password-free login makes maintaining many Linux cloud hosts easier.
# Case: Generate a key pair
[root@controller ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa): 
# After pressing Enter, a key pair is created under /root/.ssh/ (private key: id_rsa, public key: id_rsa.pub)
[root@controller ~]# ls /root/.ssh/
id_rsa  id_rsa.pub
# Case: Import the public key into the OpenStack platform
# Import the generated public key id_rsa.pub into OpenStack under the name "ssh_key"
[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub ssh_key
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | fe:4c:75:a1:fa:0b:ed:ee:42:9f:ce:21:36:6f:4b:21 |
| name        | ssh_key                                         |
| user_id     | 157c2fe27dc54c8baa467a035274ec00                |
+-------------+-------------------------------------------------+

# List the key pairs known to the platform
[root@controller ~]# openstack keypair list
+---------+-------------------------------------------------+
| Name    | Fingerprint                                     |
+---------+-------------------------------------------------+
| ssh_key | fe:4c:75:a1:fa:0b:ed:ee:42:9f:ce:21:36:6f:4b:21 |
+---------+-------------------------------------------------+
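
# A sketch of using the key pair: inject it with --key-name at creation time, then
# log in with the private key (image/flavor/network names reuse earlier cases; the
# login user depends on the image - "cirros" for CirrOS, "centos" for the CentOS cloud image):
[root@controller ~]# openstack server create key-host --image cirros --flavor m1.tiny --network vm-network --key-name ssh_key
[root@controller ~]# ssh -i ~/.ssh/id_rsa cirros@<instance-IP>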

13.2 The Cloud Host Initialization Tool

cloud-init is a tool, installed by default in many Linux cloud images, for initializing cloud hosts in a cloud environment. It reads data from various data sources and configures the virtual machine accordingly; a data source may be metadata from the platform or a user-defined script. Once cloud-init has this information, its built-in modules carry out the corresponding tasks early in boot, such as creating users or running scripts.

There are three ways to pass a user-defined script into a cloud host:

  1. When creating the cloud host in the Dashboard, type the script into the [Customization Script] box on the [Configuration] tab.
  2. When creating the cloud host in the Dashboard, upload a script file with the [Choose File] button on the [Configuration] tab.
  3. When creating the cloud host with openstack server create, pass the script file with the --user-data <script-file> option (see the sketch after the example scripts below).
# [Note] cloud-init only reads data that begins with "#cloud-config"
# Example script 1: configure name resolution and set the hostname of a newly deployed cloud host

#cloud-config
bootcmd:
  - echo "192.168.223.130 controller" >> /etc/hosts
  - hostnamectl set-hostname myhost
# Example script 2: add a YUM repository to a newly deployed cloud host

#cloud-config
yum_repos:
  yumname:
    baseurl: http://repo.huaweicloud.com/centos/7/os/x86_64/
    enabled: true
    gpgcheck: false
    name: centos
# Example script 3: allow SSH password login and set the root password

#cloud-config
ssh_pwauth: true
chpasswd:
  list: |
    root:000000
  expire: false
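
# A sketch of passing one of the scripts above at creation time (the file name
# "init.yml" is illustrative; the other names reuse earlier cases):
[root@controller ~]# openstack server create init-host --image cirros --flavor m1.tiny --network vm-network --user-data init.yml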

13.3 Environment Preparation

13.3.1 Download a System Cloud Image
  1. Install the "wget" download tool

    [root@controller ~]# yum install wget
    
  2. Download a "qcow2" cloud image file from the CentOS site

    [root@controller ~]# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-2009.qcow2
    
    [root@controller ~]# ls
    CentOS-7-x86_64-GenericCloud-2009.qcow2
    
13.3.2 Check the Time Synchronization Service
  1. Check the current time on both nodes

    [root@controller ~]# date
    2023年 07月 31日 星期一 14:35:27 CST
    
    [root@compute ~]# date
    2023年 07月 31日 星期一 14:35:30 CST
    
  2. If the two nodes' clocks differ too much, synchronize the time manually, as sketched below
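
    One way to force a correction, assuming chronyd is configured as in the earlier setup:

    [root@compute ~]# systemctl restart chronyd
    [root@compute ~]# chronyc makestep		# apply an immediate step correction
    [root@compute ~]# date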

13.3.3 Check the Existing Cloud Hosts

Since node resources are limited, check the state of the existing cloud hosts and shut them down, so that the new CentOS cloud host has enough resources to run.

[root@controller ~]# openstack server list
+--------------------------------------+---------+--------+----------------------------+-------+---------+
| ID                                   | Name    | Status | Networks                   | Image | Flavor  |
+--------------------------------------+---------+--------+----------------------------+-------+---------+
| 22ad0c49-ca4d-40f6-9013-7f1e96c922a4 | VM_host | ACTIVE | vm-network=192.168.182.179 | cirros| m1.tiny |
+--------------------------------------+---------+--------+----------------------------+-------+---------+

# Shut down the cloud host
[root@controller ~]# openstack server stop VM_host
[root@controller ~]# openstack server list
+--------------------------------------+---------+---------+----------------------------+-------+---------+
| ID                                   | Name    | Status  | Networks                   | Image | Flavor  |
+--------------------------------------+---------+---------+----------------------------+-------+---------+
| 22ad0c49-ca4d-40f6-9013-7f1e96c922a4 | VM_host | SHUTOFF | vm-network=192.168.182.179 | cirros| m1.tiny |
+--------------------------------------+---------+---------+----------------------------+-------+---------+

13.4 Deploying the CentOS Cloud Host

13.4.1 Create the Image
  1. After logging in to the Dashboard, choose [Project] -> [Compute] -> [Images] in the side navigation to open the [Images] page

    (Screenshot omitted: the Images page)

  2. Create the image

    (Screenshots omitted: the Create Image dialog)

13.4.2 Configure the Security Group

A security group is a set of rules that govern how the cloud platform interacts with external networks. Under the default rules, cloud hosts can neither be pinged from the external network nor reached with SSH remote tools, so the group must be configured.

  1. Open the [Security Groups] page: in the side navigation choose [Project] -> [Network] -> [Security Groups]

    (Screenshot omitted: the Security Groups page)

  2. Manage the rules of the default security group

    (Screenshots omitted: managing the default security group's rules)

13.4.3 Create the Key Pair
  1. Open the [Key Pairs] page: in the side navigation choose [Project] -> [Compute] -> [Key Pairs]

    (Screenshot omitted: the Key Pairs page)

  2. Create the key pair

    (Screenshot omitted: the Create Key Pair dialog)

  3. Save the private key file

    (Screenshot omitted: saving the private key file)

13.4.4 Create the Cloud Host
  1. Open the [Instances] page: in the side navigation choose [Project] -> [Compute] -> [Instances]

    (Screenshots omitted: the Launch Instance wizard - details, source, flavor, network, key pair, and configuration steps)

    #cloud-config			# script declaration: this script is loaded automatically at boot
    ssh_pwauth: true			# whether SSH allows password login
    password: 000000			# password of the default login user (the CentOS 7 cloud image's default user is "centos")
    chpasswd:			# change the root password
      list: |
        root:000000
      expire: false			# password validity: never expires
    

    (Screenshot omitted: the instance up and running)

13.4.5 Verify and Manage the Cloud Host
  1. Test connectivity to the cloud host

    # 1. Test connectivity from the cloud platform to the cloud host
    [root@controller ~]# openstack server list 
    +--------------------------------------+---------+--------+----------------------------+-------+--------+
    | ID                                   | Name    | Status | Networks                   | Image | Flavor |
    +--------------------------------------+---------+--------+----------------------------+-------+--------+
    | d95e11ec-c59c-434e-a4b6-76a19a5dd010 | server1 | ACTIVE | vm-network=192.168.182.149 |       | mini   |
    +--------------------------------------+---------+--------+----------------------------+-------+--------+
    
    [root@controller ~]# ping 192.168.182.149 -c 4
    PING 192.168.182.149 (192.168.182.149) 56(84) bytes of data.
    64 bytes from 192.168.182.149: icmp_seq=1 ttl=64 time=2.32 ms
    64 bytes from 192.168.182.149: icmp_seq=2 ttl=64 time=0.888 ms
    64 bytes from 192.168.182.149: icmp_seq=3 ttl=64 time=6.09 ms
    64 bytes from 192.168.182.149: icmp_seq=4 ttl=64 time=1.06 ms
    
    --- 192.168.182.149 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3004ms
    rtt min/avg/max/mdev = 0.888/2.593/6.094/2.096 ms
    
    # 2. Test connectivity from the external network to the cloud host
    C:\Users\Administrator>ping 192.168.182.149
    
    Pinging 192.168.182.149 with 32 bytes of data:
    Reply from 192.168.182.149: bytes=32 time=2ms TTL=64
    Reply from 192.168.182.149: bytes=32 time=1ms TTL=64
    Reply from 192.168.182.149: bytes=32 time<1ms TTL=64
    Reply from 192.168.182.149: bytes=32 time=1ms TTL=64
    
    Ping statistics for 192.168.182.149:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 0ms, Maximum = 2ms, Average = 1ms
    
  2. Manage the cloud host remotely over SSH, as sketched after the screenshots below

    (Screenshots omitted: an SSH session to the cloud host)
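
    A sketch of connecting with the private key saved in 13.4.3 (the file name "mykey.pem" is illustrative; the CentOS cloud image's default user is "centos"):

    [root@controller ~]# chmod 600 mykey.pem
    [root@controller ~]# ssh -i mykey.pem centos@192.168.182.149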
