hostname | system | host resources | IP |
---|---|---|---|
controller | CentOS 7 | 4 GB RAM, 4 cores | 192.168.100.10, 10.10.128.10 |
compute | CentOS 7 | 2 GB RAM, 2 cores | 192.168.100.20, 10.10.128.20 |
The management network for this lab is 192.168.100.0/24 and has Internet access.
The provider network is 10.10.128.0/24.
[root@localhost ~]# hostnamectl set-hostname controller
[root@controller ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
[root@localhost ~]# hostnamectl set-hostname compute
[root@compute ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
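As an optional sanity check before continuing, confirm that both names resolve from /etc/hosts; getent queries the same resolver the services will use, and each name should resolve to its management IP:
[root@controller ~]# getent hosts controller compute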
Internet connectivity test on the controller node
[root@controller ~]# ping -c 4 g.cn
PING g.cn (203.208.40.79) 56(84) bytes of data.
64 bytes from 203.208.40.79: icmp_seq=1 ttl=128 time=40.1 ms
64 bytes from 203.208.40.79: icmp_seq=2 ttl=128 time=38.5 ms
64 bytes from 203.208.40.79: icmp_seq=3 ttl=128 time=37.7 ms
64 bytes from 203.208.40.79: icmp_seq=4 ttl=128 time=34.9 ms
--- g.cn ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 16259ms
rtt min/avg/max/mdev = 34.985/37.850/40.110/1.867 ms
Connectivity test from the controller node to the compute node
[root@controller ~]# ping -c 4 compute
PING compute (192.168.100.20) 56(84) bytes of data.
64 bytes from compute (192.168.100.20): icmp_seq=1 ttl=64 time=0.601 ms
64 bytes from compute (192.168.100.20): icmp_seq=2 ttl=64 time=0.270 ms
64 bytes from compute (192.168.100.20): icmp_seq=3 ttl=64 time=0.330 ms
64 bytes from compute (192.168.100.20): icmp_seq=4 ttl=64 time=0.302 ms
--- compute ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.270/0.375/0.601/0.133 ms
Internet connectivity test on the compute node
[root@compute ~]# ping -c 4 g.cn
PING g.cn (203.208.40.79) 56(84) bytes of data.
64 bytes from 203.208.40.79: icmp_seq=1 ttl=128 time=35.6 ms
64 bytes from 203.208.40.79: icmp_seq=2 ttl=128 time=36.2 ms
64 bytes from 203.208.40.79: icmp_seq=3 ttl=128 time=38.7 ms
64 bytes from 203.208.40.79: icmp_seq=4 ttl=128 time=40.1 ms
--- g.cn ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 35.671/37.697/40.113/1.832 ms
Connectivity test from the compute node to the controller node
[root@compute ~]# ping -c 4 controller
PING controller (192.168.100.10) 56(84) bytes of data.
64 bytes from controller (192.168.100.10): icmp_seq=1 ttl=64 time=0.429 ms
64 bytes from controller (192.168.100.10): icmp_seq=2 ttl=64 time=0.293 ms
64 bytes from controller (192.168.100.10): icmp_seq=3 ttl=64 time=0.307 ms
64 bytes from controller (192.168.100.10): icmp_seq=4 ttl=64 time=0.223 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.223/0.313/0.429/0.074 ms
Controller node
To keep services correctly synchronized between nodes, install Chrony, an implementation of NTP. Both nodes synchronize against the Alibaba NTP server time1.aliyun.com.
[root@controller ~]# yum install chrony -y
Edit the /etc/chrony.conf file and delete the default server entries. The documentation's placeholder is:
server NTP_SERVER iburst
In the stock file these are actually:
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
Then add:
server time1.aliyun.com iburst
Start the service and enable it at boot:
[root@controller ~]# mkdir /var/run/chrony
[root@controller ~]# systemctl start chronyd
[root@controller ~]# systemctl enable chronyd
Compute node
[root@compute ~]# yum install chrony -y
Remove or comment out:
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
Add:
server time1.aliyun.com iburst
Start the service and enable it at boot:
[root@compute ~]# mkdir /var/run/chrony
[root@compute ~]# systemctl start chronyd.service
[root@compute ~]# systemctl enable chronyd.service
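To confirm that chrony is actually synchronizing against time1.aliyun.com, query the sources on either node (the exact output varies from run to run; an asterisk marks the currently selected source):
[root@controller ~]# chronyc sources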
Controller and compute nodes
The latest OpenStack release at the time of writing is Ussuri, which requires CentOS 8. These machines run CentOS 7 (something I did not notice beforehand), so this deployment installs the Train release.
[root@controller ~]# yum install centos-release-openstack-train -y
[root@controller ~]# yum upgrade -y
Install the appropriate OpenStack client:
[root@controller ~]# yum install python-openstackclient -y
Since CentOS enables SELinux by default, install the openstack-selinux package to automatically manage the security policies for OpenStack services:
[root@controller ~]# yum install openstack-selinux -y
[root@compute ~]# yum install openstack-selinux -y
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. This deployment uses MariaDB.
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL -y
Create the /etc/my.cnf.d/openstack.cnf file and add the following content:
[root@controller ~]# touch /etc/my.cnf.d/openstack.cnf
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.100.10 # management IP of the controller node
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the service and enable it at boot:
[root@controller ~]# systemctl start mariadb
[root@controller ~]# systemctl enable mariadb
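As a quick check that /etc/my.cnf.d/openstack.cnf was picked up, query one of its settings (at this point root has no password yet, so -p is not needed); it should report the 4096 configured above:
[root@controller ~]# mysql -uroot -e "SHOW VARIABLES LIKE 'max_connections';"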
Run the mysql_secure_installation security script:
[root@controller ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none): # just press Enter
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y # set the root password
New password: # enter the new password
Re-enter new password: # confirm it
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y # remove the anonymous users
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] n # answer n here to keep remote root login available
... skipping.
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y # remove the test database (only used for pre-release testing; not needed)
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y # reload the privilege tables
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. This deployment uses the RabbitMQ message queue.
Install and configure it:
[root@controller ~]# yum install rabbitmq-server -y
[root@controller ~]# systemctl start rabbitmq-server
[root@controller ~]# systemctl enable rabbitmq-server
Add an openstack user with the password 000000:
[root@controller ~]# rabbitmqctl add_user openstack 000000
Creating user "openstack"
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
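Optionally verify the account and its permissions with the standard rabbitmqctl listing subcommands:
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions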
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node.
[root@controller ~]# yum install memcached python-memcached -y
Edit /etc/sysconfig/memcached and configure the service to also listen on the controller's management address:
OPTIONS="-l 127.0.0.1,::1,controller"
[root@controller ~]# systemctl start memcached
[root@controller ~]# systemctl enable memcached
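Optionally confirm that memcached is listening on port 11211 on both the loopback and controller addresses:
[root@controller ~]# ss -tnlp | grep 11211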
The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and a catalog of services.
The Identity service is typically the first service a user interacts with. Once authenticated, an end user can use their identity to access other OpenStack services. Likewise, other OpenStack services leverage the Identity service to verify that users are who they say they are, and to discover where other services are within the deployment. The Identity service can also integrate with some external user management systems, such as LDAP.
Users and services can locate other services by using the service catalog managed by the Identity service. A service catalog is the collection of services available in an OpenStack deployment. Each service can have one or more endpoints, and each endpoint can be one of three types: admin, internal, or public. In a production environment, the different endpoint types might reside on separate networks exposed to different types of users, for security reasons.
For instance, the public API network might be visible from the Internet so customers can manage their own clouds. The admin API network might be restricted to operators within the organization that manages the cloud infrastructure. The internal API network might be restricted to the hosts that contain OpenStack services. Also, OpenStack supports multiple regions for scalability. For simplicity, this guide uses the management network for all endpoint types and the default RegionOne region. Together, the regions, services, and endpoints created within the Identity service comprise the service catalog of the deployment. Each OpenStack service in the deployment needs a service entry with corresponding endpoints stored in the Identity service.
The keystone service contains these components:
Server:
A centralized server provides authentication and authorization services using a RESTful interface.
Drivers:
Drivers, or service backends, are integrated into the centralized server. They are used for accessing identity information in repositories external to OpenStack, and may already exist in the infrastructure where OpenStack is deployed.
Modules:
Middleware modules run in the address space of the OpenStack components that use the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The integration between the middleware modules and the OpenStack components uses the Python Web Server Gateway Interface.
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 17
Server version: 10.3.20-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.001 sec)
Grant the keystone user access to the database. The general syntax is: GRANT ALL PRIVILEGES ON <database>.<table> TO '<user>'@'localhost' (or '%') IDENTIFIED BY '<password>';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.005 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)
[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y
Edit /etc/keystone/keystone.conf.
In the [database] section, add the following content:
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
Replace KEYSTONE_DBPASS with the password set in the database; here that is:
connection = mysql+pymysql://keystone:000000@controller/keystone
In the [token] section, add the following content:
provider = fernet
Then run the following command:
su -s /bin/sh -c "keystone-manage db_sync" keystone
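To confirm that the sync worked, the keystone database can be inspected; it should now contain the keystone tables (assuming the root password set earlier):
[root@controller ~]# mysql -uroot -p -e 'use keystone; show tables;'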
Initialize the Fernet key repositories:
The --keystone-user and --keystone-group flags specify the operating system user/group that will run keystone. These parameters exist to allow keystone to run under a different operating system user/group.
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \
> --bootstrap-admin-url http://controller:5000/v3/ \
> --bootstrap-internal-url http://controller:5000/v3/ \
> --bootstrap-public-url http://controller:5000/v3/ \
> --bootstrap-region-id RegionOne
Set the ServerName in the /etc/httpd/conf/httpd.conf file to the controller node:
ServerName controller
Create a symbolic link to the /usr/share/keystone/wsgi-keystone.conf file:
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd
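A quick check that the Identity API is answering: its root path returns a JSON version document without authentication.
[root@controller ~]# curl http://controller:5000/v3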
[root@controller ~]# vi openstack-admin.sh
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Run source openstack-admin.sh whenever these settings need to be loaded into the environment.
[root@controller ~]# openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | An Example Domain |
| enabled | True |
| id | e7a4ef4e82d54dd48d42c4e21373613f |
| name | example |
| options | {} |
| tags | [] |
+-------------+----------------------------------+
[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 16ca1d55269d4f16a79662611bd70df3 |
| is_domain | False |
| name | service |
| options | {} |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 51f50c3a2a68454a8f2122f90bdad89d |
| is_domain | False |
| name | myproject |
| options | {} |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
Create the myuser user:
[root@controller ~]# openstack user create --domain default --password-prompt myuser
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 15042e2377d24be2bd831a03842aa775 |
| name | myuser |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Create the myrole role:
[root@controller ~]# openstack role create myrole
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | None |
| domain_id | None |
| id | 52fd6fcf572d4534a36ea4a640d6e6ea |
| name | myrole |
| options | {} |
+-------------+----------------------------------+
Add the myrole role to the myproject project and myuser user:
[root@controller ~]# openstack role add --project myproject --user myuser myrole
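The assignment can be double-checked with a standard openstack CLI query:
[root@controller ~]# openstack role assignment list --user myuser --project myproject --names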
Request an authentication token as the myuser user:
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name myproject --os-username myuser token issue
Password:
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-06-28T09:54:31+0000 |
| id | gAAAAABe-FrHunkJlroXcSVjq1zrJ1JCu4oDAGzr7JutjmMgYg3CcUp2kyu-MCyebTu48i0E0ZRSHDLjAOhR7buPHfmlhXjsxgadRZoM_OBhFBUEw1dAaSYterixSDqYGOY2bGf8ovhHapJ4rc3QetifjhzUEd1fOW_pVBfS_qwOYS53f9BLgdE |
| project_id | 51f50c3a2a68454a8f2122f90bdad89d |
| user_id | 15042e2377d24be2bd831a03842aa775 |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
The previous sections used a combination of environment variables and command options to interact with the Identity service via the openstack client. To increase the efficiency of client operations, OpenStack supports simple client environment scripts, also known as OpenRC files. These scripts typically contain common options for all clients, but also support unique options.
Create admin-openrc:
[root@controller ~]# vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create demo-openrc:
[root@controller ~]# vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
To run clients as a specific project and user, simply load the associated client environment script before running them.
Load the admin-openrc file to populate the environment variables with the location of the Identity service and the admin project and user credentials:
[root@controller ~]# . admin-openrc
Request an authentication token:
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2020-06-28T10:00:02+0000 |
| id | gAAAAABe-FwSX8Yi3y4NsHe0B5CujrMvR5L0Ff7oPolybfVouJsSJIvZGiJ1e4Qo2E4jYAVQ0RRoZGh_0yPtQrENnNv-FUwYJVTbDoRwEtp_i6MJ4J4ZDf9GMKkfy4TbB7Jv8FIswiFk0l0NvKPz0YMqp2yZWarQu58qtQ-QELUFl9c_IIl3qGU |
| project_id | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| user_id | d0451d8b9a7245c09d47b85197ccc80c |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
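The same check works for the demo credentials created above; sourcing demo-openrc and issuing a token confirms the myuser account end to end:
[root@controller ~]# . demo-openrc
[root@controller ~]# openstack token issue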
Keystone is a group of internal services exposed on one or more endpoints. Frontends combine these services. For example, an authenticate call validates user/project credentials with the Identity service and, upon success, creates and returns a token with the Token service.
The Identity service provides authentication credentials and data about users and groups. In the basic case, this data is managed by the Identity service itself, which handles the associated CRUD (create, read, update, delete) operations. In more complex cases, the data can be managed by a backend; for example, when the Identity service acts as a frontend for LDAP, the LDAP server authenticates, and the Identity service's job is to relay that information accurately.
A User represents an individual API consumer. A user must be owned by a specific domain, so user names are not globally unique, only unique within their domain.
Groups are collections of Users. A group must be owned by a specific domain, so group names are not globally unique, only unique within their domain.
The Resource service provides data about projects and domains.
A Project represents the base unit of ownership in OpenStack: all resources in OpenStack should be owned by a specific project. A project itself must be owned by a specific domain, so project names are not globally unique, only unique within their domain. If the domain for a project is not specified, it is added to the default domain.
A Domain contains projects, users, and groups, and each domain is unique. Each domain defines a namespace in which an API-visible name attribute exists. Keystone provides a default domain, named "Default".
In the Identity v3 API, attribute uniqueness is as follows:
Domain name: unique across all domains.
Role name: unique within the owning domain.
User name: unique within the owning domain.
Project name: unique within the owning domain.
Group name: unique within the owning domain.
The Assignment service provides data about roles and role assignments.
A Role dictates the level of authorization an end user can obtain. Roles can be granted at the domain or project level, and can be assigned to an individual user or a group. Role names are unique within the owning domain.
A role assignment is a triple of Role, Resource, and Identity.
Once a user's credentials have been verified, the Token service validates and manages the tokens used for authenticating requests.
The Catalog service provides an endpoint registry used for endpoint discovery.
Keystone is an HTTP frontend to several services. Like other OpenStack applications, this is done using the Python WSGI interface, and the applications are configured together using Paste. The application's HTTP endpoints are made up of pipelines of WSGI middleware, for example:
[pipeline:api_v3]
pipeline = healthcheck cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context json_body ec2_extension_v3 s3_extension service_v3
These in turn use a subclass of keystone.common.wsgi.ComposedRouter to link URLs to controllers (subclasses of keystone.common.wsgi.Application). Within each controller, one or more managers are loaded; these are thin wrapper classes that load the appropriate service drivers based on the keystone configuration.
keystone.assignment.controllers.GrantAssignmentV3
keystone.assignment.controllers.ImpliedRolesV3
keystone.assignment.controllers.ProjectAssignmentV3
keystone.assignment.controllers.TenantAssignment
keystone.assignment.controllers.RoleAssignmentV3
keystone.assignment.controllers.RoleV3
keystone.auth.controllers.Auth
keystone.catalog.controllers.EndpointFilterV3Controller
keystone.catalog.controllers.EndpointGroupV3Controller
keystone.catalog.controllers.EndpointV3
keystone.catalog.controllers.ProjectEndpointGroupV3Controller
keystone.catalog.controllers.RegionV3
keystone.catalog.controllers.ServiceV3
keystone.contrib.ec2.controllers.Ec2ControllerV3
keystone.credential.controllers.CredentialV3
keystone.federation.controllers.IdentityProvider
keystone.federation.controllers.FederationProtocol
keystone.federation.controllers.MappingController
keystone.federation.controllers.Auth
keystone.federation.controllers.DomainV3
keystone.federation.controllers.ProjectAssignmentV3
keystone.federation.controllers.ServiceProvider
keystone.federation.controllers.SAMLMetadataV3
keystone.identity.controllers.GroupV3
keystone.identity.controllers.UserV3
keystone.oauth1.controllers.ConsumerCrudV3
keystone.oauth1.controllers.AccessTokenCrudV3
keystone.oauth1.controllers.AccessTokenRolesV3
keystone.oauth1.controllers.OAuthControllerV3
keystone.policy.controllers.PolicyV3
keystone.resource.controllers.DomainV3
keystone.resource.controllers.DomainConfigV3
keystone.resource.controllers.ProjectV3
keystone.resource.controllers.ProjectTagV3
keystone.revoke.controllers.RevokeController
keystone.trust.controllers.TrustV3
Each of the services can be configured to use an endpoint, allowing keystone to fit a variety of environments and requirements. The backend for each service is defined in the keystone.conf file, with the key driver located under a group associated with each service.
A general class exists under each endpoint to provide an abstract base class for any implementations, identifying the expected service implementations. The abstract base classes are stored in the service's backends directory as base.py. The corresponding drivers for the services are:
keystone.assignment.backends.base.AssignmentDriverBase
keystone.assignment.role_backends.base.RoleDriverBase
keystone.auth.plugins.base.AuthMethodHandler
keystone.catalog.backends.base.CatalogDriverBase
keystone.credential.backends.base.CredentialDriverBase
keystone.endpoint_policy.backends.base.EndpointPolicyDriverBase
keystone.federation.backends.base.FederationDriverBase
keystone.identity.backends.base.IdentityDriverBase
keystone.identity.mapping_backends.base.MappingDriverBase
keystone.identity.shadow_backends.base.ShadowUsersDriverBase
keystone.oauth1.backends.base.Oauth1DriverBase
keystone.policy.backends.base.PolicyDriverBase
keystone.resource.backends.base.ResourceDriverBase
keystone.resource.config_backends.base.DomainConfigDriverBase
keystone.revoke.backends.base.RevokeDriverBase
keystone.token.providers.base.Provider
keystone.trust.backends.base.TrustDriverBase
The templated backend is designed mainly for the common service-catalog use case within the Keystone project: a catalog backend that simply expands pre-configured templates to provide catalog data.
An example paste.deploy configuration:
[DEFAULT]
catalog.RegionOne.identity.publicURL = http://localhost:$(public_port)s/v3
catalog.RegionOne.identity.adminURL = http://localhost:$(public_port)s/v3
catalog.RegionOne.identity.internalURL = http://localhost:$(public_port)s/v3
catalog.RegionOne.identity.name = 'Identity Service'
Keystone is designed to accommodate many styles of backends. As a result, many methods and data types will happily accept more data than they know what to do with and pass it along to the backend.
There are several primary data types, such as the users, groups, projects, domains, roles, and tokens described above.
The Image service enables users to discover, register, and retrieve virtual machine images. It offers a REST API that allows querying of virtual machine image metadata and retrieval of the actual images. Virtual machine images made available through the Image service can be stored in a variety of locations, from simple filesystems to object storage systems such as OpenStack Object Storage.
For simplicity, this guide configures the Image service to use the file backend, which uploads and stores images in a directory on the controller node hosting the Image service. By default, this directory is /var/lib/glance/images/.
The OpenStack Image service is central to Infrastructure-as-a-Service (IaaS). It accepts API requests for disk or server images, and metadata definitions from end users or OpenStack Compute components. It also supports the storage of disk or server images on various repository types, including OpenStack Object Storage.
The OpenStack Image service consists mainly of the glance-api service, a database that stores image metadata, and a storage repository for the image files themselves.
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.3.20-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database glance;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)
To create the glance service credentials, complete the following steps.
Create the glance user.
Source the admin credentials to gain access to admin-only CLI commands:
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 12af032add7740a1a5e469d2604d5aa9 |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Add the admin role to the glance user in the service project:
[root@controller ~]# openstack role add --project service --user glance admin
Create the glance service entity:
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 3eb979627b50476faab627aa302b84e1 |
| name | glance |
| type | image |
+-------------+----------------------------------+
Create the Image service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | afd67248eafa4b64beadcc6b214b8100 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3eb979627b50476faab627aa302b84e1 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 4577afb1371042d887522bee377c65ae |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3eb979627b50476faab627aa302b84e1 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1c25397e44cf4329a5d483e0bba37884 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3eb979627b50476faab627aa302b84e1 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
Install the packages:
[root@controller ~]# yum install openstack-glance -y
Edit the /etc/glance/glance-api.conf file.
In the [database] section, add the following configuration; note that GLANCE_DBPASS here is the password set in the database:
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
In the [keystone_authtoken] and [paste_deploy] sections, add the following content; note that GLANCE_PASS here is the password set when the glance user was created in OpenStack:
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
In the [glance_store] section, configure the local filesystem store and the location of the image files:
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Populate the Image service database:
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
[root@controller ~]# systemctl start openstack-glance-api
[root@controller ~]# systemctl enable openstack-glance-api
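Before uploading anything, a quick check that the Image API responds; its root path replies with a JSON list of API versions:
[root@controller ~]# curl http://controller:9292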
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment.
[root@controller ~]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it:
[root@controller ~]# glance image-create --name "cirros" \
> --file cirros-0.4.0-x86_64-disk.img \
> --disk-format qcow2 --container-format bare \
> --visibility public
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2020-06-29T05:57:59Z |
| disk_format | qcow2 |
| id | 42b7a330-85ad-462a-9282-e6b09576b806 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| os_hash_algo | sha512 |
| os_hash_value | 6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e |
| | 2161b5b5186106570c17a9e58b64dd39390617cd5a350f78 |
| os_hidden | False |
| owner | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| protected | False |
| size | 12716032 |
| status | active |
| tags | [] |
| updated_at | 2020-06-29T05:57:59Z |
| virtual_size | Not available |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+
Confirm the upload of the image and validate its attributes:
[root@controller ~]# glance image-list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 42b7a330-85ad-462a-9282-e6b09576b806 | cirros |
+--------------------------------------+--------+
The Placement API service was introduced within Nova in the 14.0.0 Newton release. It is a REST API stack and data model used to track resource provider inventories and usages, along with different classes of resources.
For example, a resource provider can be a compute node, a shared storage pool, or an IP allocation pool. The Placement service tracks the inventory and usage of each provider.
Each resource provider can also have a set of traits that describe qualitative aspects of that provider. A trait describes an aspect of a resource provider that cannot itself be consumed, but that a workload may want to specify; for example, the available disk may be a solid-state drive (SSD).
Placement provides a WSGI script for the Placement API, suitable for running the service with Apache, nginx, or other WSGI-capable web servers.
Create the placement database:
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 22
Server version: 10.3.20-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database placement;
Query OK, 1 row affected (0.001 sec)
Grant the placement user full local and remote access to the placement database. As before, replace PLACEMENT_DBPASS with the password of your choosing.
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
Create the placement user:
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | ba0d6f59cc174f3680eb3107b43d9071 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Grant the placement user the admin role in the service project:
[root@controller ~]# openstack role add --project service --user placement admin
Create the Placement API entry in the service catalog:
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | dbbf0e2a6df74af4aef3b0cf2141a1f0 |
| name | placement |
| type | placement |
+-------------+----------------------------------+
Create the Placement API service endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2f11c7900de84494bdd9eee6d0e82057 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dbbf0e2a6df74af4aef3b0cf2141a1f0 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a7c01895fd24477886f3932d190d4bdb |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dbbf0e2a6df74af4aef3b0cf2141a1f0 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 32a63557a5cb47e08fd07fd426d111f1 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dbbf0e2a6df74af4aef3b0cf2141a1f0 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
Install the packages:
[root@controller ~]# yum install openstack-placement-api -y
Edit the /etc/placement/placement.conf file and add the following content.
Before editing, the configuration can be tidied up by stripping comments and blank lines:
[root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
[root@controller ~]# grep -Ev '(^#|^$)' !$ > /etc/placement/placement.conf
grep -Ev '(^#|^$)' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
[root@controller ~]# cat /etc/placement/placement.conf
[DEFAULT]
[api]
[cors]
[keystone_authtoken]
[oslo_policy]
[placement]
[placement_database]
[profiler]
In the [placement_database] section, configure database access; PLACEMENT_DBPASS is the password set in the database:
[placement_database]
# ...
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
In the [api] and [keystone_authtoken] sections, configure Identity service access; PLACEMENT_PASS here is the password set when the openstack placement user was created:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
Populate the placement database:
[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
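A quick check that the Placement API is being served by httpd; its root path returns an unauthenticated JSON version document:
[root@controller ~]# curl http://controller:8778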
Run the status checks to make sure everything is in order:
[root@controller ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+
The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates this integration.
OpenStack Compute is a major part of an IaaS system. Its main modules are implemented in Python.
OpenStack Compute interacts with OpenStack Identity for authentication, with OpenStack Placement for resource inventory tracking and selection, with the OpenStack Image service for disk and server images, and with the OpenStack Dashboard for the user and administrative interface.
OpenStack Compute consists of the following components:
nova-api service: Accepts and responds to end-user compute API calls. The service supports the OpenStack Compute API. It enforces some policies and initiates most orchestration activities, such as running an instance.
nova-api-metadata service: Accepts metadata requests from instances. It is generally used when running in multi-host mode with nova-network installations.
nova-compute service: A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example:
XenAPI for XenServer/XCP
libvirt for KVM or QEMU
VMwareAPI for VMware
nova-scheduler service: Takes a virtual machine instance request from the queue and determines on which compute server host it runs.
nova-conductor module: Mediates interactions between the nova-compute service and the database. It eliminates direct accesses to the cloud database made by the nova-compute service. Do not deploy it on nodes where the nova-compute service runs.
nova-novncproxy daemon: Provides a proxy for accessing running instances through a VNC connection. Supports browser-based novnc clients.
nova-spicehtml5proxy daemon: Provides a proxy for accessing running instances through a SPICE connection. Supports browser-based HTML5 clients.
nova-xvpvncproxy daemon: Provides a proxy for accessing running instances through a VNC connection. Supports an OpenStack-specific Java client. Deprecated since 19.0.0 (Stein); it will be removed in a future release.
The queue: A central hub for passing messages between daemons. Usually implemented with RabbitMQ.
SQL database: Stores most build-time and run-time states for a cloud infrastructure.
Create the nova_api, nova, and nova_cell0 databases:
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 35
Server version: 10.3.20-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.000 sec)
Grant the nova user full access to all three databases; NOVA_DBPASS here is our chosen password.
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Create the Compute service credentials.
Create the nova user:
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 09196e3534444e38af2483d34a0642da |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Add the admin role to the nova user:
[root@controller ~]# openstack role add --project service --user nova admin
Create the nova service entity:
[root@controller ~]# openstack service create --name nova \
> --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | ed362f51b0c54979897bc1e98e5e14e9 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
Create the Compute API service endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 29f76fc4025949698a03e63469c5c998 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ed362f51b0c54979897bc1e98e5e14e9 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e5f175955c094b968f3ab27f3e38f484 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ed362f51b0c54979897bc1e98e5e14e9 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b76c32a37e26482aa8841becdb3df4f1 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ed362f51b0c54979897bc1e98e5e14e9 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
Install the packages:
[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
Edit the /etc/nova/nova.conf file to complete the configuration below.
First, tidy up the configuration file:
[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@controller ~]# grep -Ev '(^#|^$)' !$ > /etc/nova/nova.conf
grep -Ev '(^#|^$)' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
In the [api_database] and [database] sections, configure database access; NOVA_DBPASS here is the password granted in the database:
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
In the [DEFAULT] section, configure RabbitMQ message queue access; RABBIT_PASS here is the openstack account's password:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
In the [api] and [keystone_authtoken] sections, configure Identity service access; NOVA_PASS here is the nova user's password:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
In the [DEFAULT] section, set my_ip to the controller node's management interface IP address:
[DEFAULT]
# ...
my_ip = 192.168.100.10
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
> By default, Compute uses an internal firewall driver. Since the Networking service includes its own firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver driver.
In the [vnc] section, configure the VNC proxy to use the controller node's management interface IP address:
[vnc]
enabled = true
# ...
server_listen = 192.168.100.10
server_proxyclient_address = 192.168.100.10
In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
In the [placement] section, configure access to the Placement service; PLACEMENT_PASS here is the placement user's password:
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Populate the nova-api database:
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
Verify that cell0 and cell1 are registered correctly:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False |
| cell1 | ac36e8e1-b1e5-4534-b991-40577e68d273 | rabbit://openstack:****@controller:5672/ | mysql+pymysql://nova:****@controller/nova | False |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
[root@controller ~]# systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
[root@controller ~]# systemctl enable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
Install and configure the compute node as follows.
Install the packages:
[root@compute ~]# yum install openstack-nova-compute -y
Edit the /etc/nova/nova.conf file and complete the following configuration:
[root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@compute ~]# grep -Ev '(^#|^$)' !$ > /etc/nova/nova.conf
grep -Ev '(^#|^$)' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
In the [DEFAULT] section, configure RabbitMQ message queue access; RABBIT_PASS here is the openstack account's password in RabbitMQ:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
In the [api] and [keystone_authtoken] sections, configure Identity service access; NOVA_PASS here is the nova user's password:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
In the [DEFAULT] section, configure the my_ip option; MANAGEMENT_INTERFACE_IP_ADDRESS here is the compute node's management interface address:
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In the [vnc] section, enable and configure remote console access. server_listen listens on all IP addresses, while server_proxyclient_address uses only the compute node's management IP. The base URL indicates the location where a web browser can access remote consoles of instances on this compute node:
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the Placement API; PLACEMENT_PASS here is the placement user's password:
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Determine whether the compute node supports hardware acceleration for virtual machines:
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
If this command returns a value of one or greater, the compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, the compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the /etc/nova/nova.conf file and add the following configuration:
[libvirt]
# ...
virt_type = qemu
[root@compute ~]# systemctl start libvirtd openstack-nova-compute
[root@compute ~]# systemctl enable libvirtd openstack-nova-compute
Watch the log at /var/log/nova/nova-compute.log; the connection is likely to fail because of the firewall. For now, stop and disable firewalld on both nodes:
[root@controller ~]# systemctl stop firewalld
[root@controller ~]# systemctl disable firewalld
[root@compute ~]# systemctl stop firewalld
[root@compute ~]# systemctl disable firewalld
Verify that the compute host is registered in the database:
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+---------+------+---------+-------+----------------------------+
| 9 | nova-compute | compute | nova | enabled | up | 2020-06-29T09:54:06.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+
Discover the compute host:
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': ac36e8e1-b1e5-4534-b991-40577e68d273
Checking host mapping for compute host 'compute': 42b8cf63-a002-43ba-9bcb-7a99cbaae6e2
Creating host mapping for compute host 'compute': 42b8cf63-a002-43ba-9bcb-7a99cbaae6e2
Found 1 unmapped computes in cell: ac36e8e1-b1e5-4534-b991-40577e68d273
Whenever you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, set an appropriate discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
List the service components to verify the successful launch and registration of each process:
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+----------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2020-06-29T09:57:20.000000 |
| 3 | nova-scheduler | controller | internal | enabled | up | 2020-06-29T09:57:20.000000 |
| 9 | nova-compute | compute | nova | enabled | up | 2020-06-29T09:57:26.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
List the API endpoints in the Identity service to verify connectivity with the Identity service:
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | |
| placement | placement | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | RegionOne |
| | | internal: http://controller:8778 |
| | | |
| nova | compute | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | |
+-----------+-----------+-----------------------------------------+
List images to verify connectivity with the Image service:
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 42b7a330-85ad-462a-9282-e6b09576b806 | cirros | active |
+--------------------------------------+--------+--------+
Check that the cells and the Placement API are working correctly:
[root@controller ~]# nova-status upgrade check
Error:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 398, in main
ret = fn(*fn_args, **fn_kwargs)
File "/usr/lib/python2.7/site-packages/oslo_upgradecheck/upgradecheck.py", line 102, in check
result = func(self)
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 164, in _check_placement
versions = self._placement_get("/")
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 154, in _placement_get
return client.get(path, raise_exc=True).json()
File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 386, in get
return self.request(url, 'GET', **kwargs)
File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 248, in request
return self.session.request(url, method, **kwargs)
File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 943, in request
raise exceptions.from_response(resp, method, url)
Forbidden: Forbidden (HTTP 403)
This is likely a bug in the packages; fix it as follows:
[root@controller ~]# vi /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
[root@controller ~]# systemctl restart httpd
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+--------------------------------+
OpenStack Networking (neutron) allows you to create interface devices managed by other OpenStack services and attach them to networks. Plug-ins can be implemented to accommodate different networking equipment and software, providing flexibility to the OpenStack architecture and deployment. It includes the following components:
neutron-server: Accepts API requests and routes them to the appropriate OpenStack Networking plug-in.
OpenStack Networking plug-ins and agents: Create networks or subnets and provide IP addressing. These plug-ins and agents differ depending on the vendor and technologies used in the particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware NSX product.
Messaging queue: Used by most OpenStack Networking installations to route information between the neutron-server and the various agents. It also serves as a database to store networking state for particular plug-ins.
Modify the controller node's second network interface configuration as follows; INTERFACE_NAME is the interface name, and leave HWADDR and UUID unchanged:
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
Reboot the system to apply the changes.
Create the database:
[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 43
Server version: 10.3.20-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.001 sec)
Grant the neutron user access to the neutron database; choose your own NEUTRON_DBPASS:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
Create the neutron user:
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 3e609d3fc2764ca6a4ac82bce9ad84a4 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
Add the admin role to the neutron user:
[root@controller ~]# openstack role add --project service --user neutron admin
Create the neutron service entity:
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 5d05e3a3ab0e4e96b93df61df38ae0ca |
| name | neutron |
| type | network |
+-------------+----------------------------------+
Create the Networking service API endpoints:
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 20793dc0b7f74db18e642a6f5998bd55 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5d05e3a3ab0e4e96b93df61df38ae0ca |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1d416c6ef2264a649d20ad9e9286eb73 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5d05e3a3ab0e4e96b93df61df38ae0ca |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 07d33104a9ee47228c4c9108d3addd99 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5d05e3a3ab0e4e96b93df61df38ae0ca |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
There are two options:
Option 1: Deploys the simplest possible architecture, which only supports attaching instances to provider (external) networks. There are no self-service networks, routers, or floating IP addresses; only an administrator or other privileged user can manage provider networks.
Option 2: Augments option 1 with layer-3 services that support attaching instances to self-service networks, which regular users can manage.
Self-service networks typically use overlay networks. Overlay network protocols such as VXLAN include additional headers that increase overhead and decrease the space available for the payload or user data.
This guide follows option 2:
Install the components:
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
Configure the server component.
Edit the /etc/neutron/neutron.conf file and complete the following configuration:
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@controller ~]# grep -Ev '(^#|^$)' !$ > /etc/neutron/neutron.conf
In the [database] section, configure database access; NEUTRON_DBPASS is the database password:
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
In the [DEFAULT] section, configure message queue access; RABBIT_PASS is the openstack user's password:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access; NEUTRON_PASS is the neutron user's password:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes; NOVA_PASS is the nova user's password:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@controller ~]# grep -Ev '(^#|^$)' !$ > /etc/neutron/plugins/ml2/ml2_conf.ini
In the [ml2] section, enable flat, VLAN, and VXLAN networks:
[ml2]
# ...
type_drivers = flat,vlan,vxlan
In the [ml2] section, enable VXLAN self-service networks:
[ml2]
# ...
tenant_network_types = vxlan
In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
# ...
flat_networks = provider
In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
In the [securitygroup] section, enable ipset to improve the efficiency of security group rules:
[securitygroup]
# ...
enable_ipset = true
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following configuration:
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@controller ~]# grep -Ev '(^#|^$)' !$ > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider network interface (the second NIC configured above).
In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes, so replace OVERLAY_INTERFACE_IP_ADDRESS with the controller node's management IP address.
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure that your Linux operating system kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1 (a quick way to check follows this list):
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
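A minimal check, assuming the br_netfilter kernel module provides these sysctls on CentOS 7 (load it first if the keys are missing):
[root@controller ~]# modprobe br_netfilter
[root@controller ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
To keep the module loaded across reboots, list it in a file under /etc/modules-load.d/.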
The Layer-3 (L3) agent provides routing and NAT services for self-service networks.
Edit the /etc/neutron/l3_agent.ini file and complete the following configuration:
[root@controller ~]# cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
[root@controller ~]# grep -Ev '(^#|^$)' !$ > /etc/neutron/l3_agent.ini
In the [DEFAULT] section, configure the Linux bridge interface driver:
[DEFAULT]
# ...
interface_driver = linuxbridge
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following configuration:
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '(^$|^#)' !$ > /etc/neutron/dhcp_agent.ini
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach metadata over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Edit the /etc/neutron/metadata_agent.ini file and complete the following configuration.
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '(^#|^$)' !$ > /etc/neutron/metadata_agent.ini
In the [DEFAULT] section, add the following:
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Replace METADATA_SECRET with a suitable secret.
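One simple way to generate a random secret (an illustrative command; any hard-to-guess string works):
[root@controller ~]# openssl rand -hex 10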
Edit /etc/nova/nova.conf and complete the following configuration.
In the [neutron] section, add the following:
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Replace NEUTRON_PASS with the password of the neutron user in the Identity service, and METADATA_SECRET with the secret chosen for the metadata proxy.
The Networking service initialization scripts expect a symbolic link, /etc/neutron/plugin.ini, pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command:
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
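A quick check that the link points where expected:
[root@controller ~]# ls -l /etc/neutron/plugin.ini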
Populate the database:
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service, then enable and start the Networking services:
[root@controller ~]# systemctl restart openstack-nova-api
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
On the compute node, modify the configuration file of the second NIC as follows, where INTERFACE_NAME is the interface name; leave the HWADDR and UUID entries unchanged:
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
Reboot the system to apply the changes.
Install the Networking components on the compute node:
[root@compute ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
Edit the /etc/neutron/neutron.conf file and complete the following configuration:
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@compute ~]# grep -Ev '(^#|^$)' !$ > /etc/neutron/neutron.conf
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password of the neutron user in the Identity service.
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
Configure the Linux bridge agent.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '(^#|^$)' !$ > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In the [linux_bridge] section, complete the following configuration:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the provider physical network interface.
In the [vxlan] section, enable VXLAN overlay networks and configure the IP address of the physical network interface that handles overlay traffic:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the compute node (192.168.100.20 in this lab).
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Verify that all of the following sysctl values are set to 1, to ensure that the Linux kernel supports bridge filtering:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
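As on the controller node, load br_netfilter if needed and confirm that both values read 1:
[root@compute ~]# modprobe br_netfilter
[root@compute ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables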
Edit the /etc/nova/nova.conf file and complete the following configuration.
In the [neutron] section, configure the access parameters:
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password of the neutron user in the Identity service.
Restart the Compute service, then enable and start the Linux bridge agent:
[root@compute ~]# systemctl restart openstack-nova-compute
[root@compute ~]# systemctl enable neutron-linuxbridge-agent
[root@compute ~]# systemctl restart neutron-linuxbridge-agent
Back on the controller node, verify the Networking installation. List the loaded extensions to confirm that the neutron-server process started successfully:
[root@controller ~]# openstack extension list --network | more
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name | Alias | Description |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Subnet Pool Prefix Operations | subnetpool-prefix-ops | Provides support for adjusting the prefix list of subnet pools |
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default. |
| Availability Zone | availability_zone | The availability zone extension. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Subnet Onboard | subnet_onboard | Provides support for onboarding subnets into subnet pools |
| Auto Allocated Topology Services | auto-allocated-topology | Auto Allocated Topology Services. |
| Neutron L3 Configurable external gateway mode | ext-gw-mode | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway |
| Port Binding | binding | Expose port bindings of a virtual port to external application |
| agent | agent | The agent management extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Empty String Filtering Extension | empty-string-filtering | Allow filtering by attributes with empty string value |
| Tag support for resources with standard attribute: subnet, trunk, network_segment_range, router, network, policy, subnetpool, port, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |
| Quota management support | quotas | Expose functions for quotas management per tenant |
| If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. |
| Prevent L3 router ports IP address change extension | l3-port-ip-change-not-allowed | Prevent change of IP address for some L3 router ports |
| Availability Zone Filter Extension | availability_zone_filter | Add filter parameters to AvailabilityZone resource |
| HA Router extension | l3-ha | Adds HA capability to routers. |
| Enforce Router's Admin State Down Before Update Extension | router-admin-state-down-before-update | Ensure that the admin state of a router is down (admin_state_up=False) before updating the distributed attribute |
| Filter parameters validation | filter-validation | Provides validation on filter parameters. |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |
| Quota details management support | quota_details | Expose functions for quotas usage statistics per project |
| Address scope | address-scope | Address scopes extension. |
| Neutron Extra Route | extraroute | Extra routes configuration for L3 router |
| Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. |
| Agent's Resource View Synced to Placement | agent-resources-synced | Stores success/failure of last sync to Placement |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Floating IP Pools Extension | floatingip-pools | Provides a floating IP pools API. |
| Neutron Port MAC address regenerate | port-mac-address-regenerate | Network port MAC address regenerate |
| Add security_group type to network RBAC | rbac-security-groups | Add security_group type to network RBAC |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |
| Router Flavor Extension | l3-flavors | Flavor support for routers. |
| Port Security | port-security | Provides port security |
| Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |
| Port filtering on security groups | port-security-groups-filtering | Provides security groups filtering when listing ports |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| security-group | security-group | The security groups extension. |
| L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among l3 agents |
| Floating IP Port Details Extension | fip-port-details | Add port_details attribute to Floating IP resource |
| Router Availability Zone | router_availability_zone | Availability zone support for router. |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. |
| Atomically add/remove extra routes | extraroute-atomic | Edit extra routes of a router on server side by atomically adding/removing extra routes |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing ports |
| Neutron L3 Router | router | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway. |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| Port Bindings Extended | binding-extended | Expose port bindings of a virtual port to external application |
| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |
| Distributed Virtual Router | dvr | Enables configuration of Distributed Virtual Routers. |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
List the agents; the output should show four agents on the controller node and one agent on each compute node:
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 3ba49d5b-b613-469c-8d40-c9bb4106ec00 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| 8f5a7876-60ec-4fa4-b475-54f03837de1e | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| b3ad2e41-1d8d-4c9a-92d1-3e543d0e36ee | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| d3d6708b-6226-43d4-9469-25412d184a7a | Linux bridge agent | controller | None | :-) | UP | neutron-linuxbridge-agent |
| f84b4b15-ed90-484f-9a70-e55caa5b117d | Linux bridge agent | compute | None | :-) | UP | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
Install the Horizon dashboard on the controller node:
[root@controller ~]# yum install openstack-dashboard -y
Edit the /etc/openstack-dashboard/local_settings file and complete the following configuration:
[root@controller ~]# cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak
[root@controller ~]# grep -Ev '(^[[:space:]]*#|^$)' !$ > /etc/openstack-dashboard/local_settings
Configure the dashboard to use OpenStack services on the controller node:
OPENSTACK_HOST = "controller"
Configure the memcached session storage service:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
Enable the Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Enable support for domains:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Configure the API versions:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
将"Default"配置为通过dashboard创建的用户的默认域:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
将"user"配置为通过dashboard创建的用户的默认角色:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
If the following line is not already present, add it to /etc/httpd/conf.d/openstack-dashboard.conf:
WSGIApplicationGroup %{GLOBAL}
Then make the following changes to the /etc/httpd/conf.d/openstack-dashboard.conf file, so the dashboard is served from the web root instead of /dashboard (the commented lines are the originals):
#WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
#Alias /dashboard/static /usr/share/openstack-dashboard/static
Alias /static /usr/share/openstack-dashboard/static
[root@controller ~]# systemctl restart httpd memcached
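A quick smoke test that the dashboard is answering (an informal check; 200 is the expected status once httpd is up):
[root@controller ~]# curl -s -o /dev/null -w '%{http_code}\n' http://controller/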
Access the dashboard at http://controller_IP/. Log in as the admin or demo user; the domain is default.
The network layout for this lab was defined at the outset as:
Management physical network: 192.168.100.0/24
Provider physical network: 10.10.128.0/24
Self-service virtual network: 172.16.1.0/24
Create the provider (external) network:
[root@controller ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2020-06-30T05:33:28Z |
| description | |
| dns_domain | None |
| id | b66bf1a4-1f9b-40ea-9dfe-0a72fdc40685 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='86fc1bb169a443f98fdaf8e2fb25f9cd', project.name='admin', region_name='', zone= |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2020-06-30T05:33:28Z |
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
Create a subnet on the provider network:
[root@controller ~]# openstack subnet create --network provider --allocation-pool start=10.10.128.100,end=10.10.128.200 --dns-nameserver 8.8.4.4 --gateway 10.10.128.254 --subnet-range 10.10.128.0/24 provider
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools | 10.10.128.100-10.10.128.200 |
| cidr | 10.10.128.0/24 |
| created_at | 2020-06-30T05:36:25Z |
| description | |
| dns_nameservers | 8.8.4.4 |
| enable_dhcp | True |
| gateway_ip | 10.10.128.254 |
| host_routes | |
| id | bcf07ef2-5b80-4bd4-8fd0-ad7b92b78673 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='86fc1bb169a443f98fdaf8e2fb25f9cd', project.name='admin', region_name='', zone= |
| name | provider |
| network_id | b66bf1a4-1f9b-40ea-9dfe-0a72fdc40685 |
| prefix_length | None |
| project_id | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2020-06-30T05:36:25Z |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
Create the self-service network:
[root@controller ~]# openstack network create selfservice
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2020-06-30T05:25:44Z |
| description | |
| dns_domain | None |
| id | 00e77183-d7d9-435c-9ad8-3a29c08c1b1d |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='86fc1bb169a443f98fdaf8e2fb25f9cd', project.name='admin', region_name='', zone= |
| mtu | 1450 |
| name | selfservice |
| port_security_enabled | True |
| project_id | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 1 |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2020-06-30T05:25:44Z |
+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
Create a subnet on the self-service network:
[root@controller ~]# openstack subnet create --network selfservice --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools | 172.16.1.2-172.16.1.254 |
| cidr | 172.16.1.0/24 |
| created_at | 2020-06-30T05:28:38Z |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 172.16.1.1 |
| host_routes | |
| id | 4bc8c51e-7962-4cd5-8322-a71d1f369806 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='86fc1bb169a443f98fdaf8e2fb25f9cd', project.name='admin', region_name='', zone= |
| name | selfservice |
| network_id | 00e77183-d7d9-435c-9ad8-3a29c08c1b1d |
| prefix_length | None |
| project_id | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2020-06-30T05:28:38Z |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
Create a router:
[root@controller ~]# openstack router create router
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2020-06-30T05:37:12Z |
| description | |
| distributed | False |
| external_gateway_info | null |
| flavor_id | None |
| ha | False |
| id | a64e13d3-4a1e-4f83-bcb0-2d6e476f8d83 |
| location | cloud='', project.domain_id=, project.domain_name='Default', project.id='86fc1bb169a443f98fdaf8e2fb25f9cd', project.name='admin', region_name='', zone= |
| name | router |
| project_id | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| revision_number | 1 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2020-06-30T05:37:12Z |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
Add the self-service subnet as an interface on the router, and set a gateway on the provider network for the router:
[root@controller ~]# openstack router add subnet router selfservice
[root@controller ~]# openstack router set router --external-gateway provider
List the network namespaces; you should see one qrouter namespace and two qdhcp namespaces:
[root@controller ~]# ip netns
qrouter-a64e13d3-4a1e-4f83-bcb0-2d6e476f8d83 (id: 2)
qdhcp-b66bf1a4-1f9b-40ea-9dfe-0a72fdc40685 (id: 1)
qdhcp-00e77183-d7d9-435c-9ad8-3a29c08c1b1d (id: 0)
List ports on the router to determine the gateway IP address on the provider network:
[root@controller ~]# openstack port list --router router
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| 276a7ea1-f013-4d20-9e7c-d0ac757954c2 | | fa:16:3e:32:73:c0 | ip_address='172.16.1.1', subnet_id='4bc8c51e-7962-4cd5-8322-a71d1f369806' | ACTIVE |
| dc707682-495c-4de9-a6c8-ea5fa498d6e7 | | fa:16:3e:4b:a6:11 | ip_address='10.10.128.152', subnet_id='bcf07ef2-5b80-4bd4-8fd0-ad7b92b78673' | ACTIVE |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
Ping the router's gateway address on the provider network:
[root@controller ~]# ping 10.10.128.152
PING 10.10.128.152 (10.10.128.152) 56(84) bytes of data.
64 bytes from 10.10.128.152: icmp_seq=1 ttl=128 time=10.8 ms
64 bytes from 10.10.128.152: icmp_seq=2 ttl=128 time=1.04 ms
^X^C
--- 10.10.128.152 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.043/5.936/10.830/4.894 ms
Create an m1.nano flavor, the smallest flavor, which is sufficient for the CirrOS test image:
[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
Next, launch an instance from the dashboard. First adjust the default security group: delete all existing rules, then add rules allowing all ingress and egress traffic.
To launch the instance: select the image and click Next, select the flavor, select the selfservice network, and click Launch Instance.
Bind a floating IP to the instance. You can then control the instance through a remote connection tool via this floating IP, which plays the same role as the public IP you get when buying a cloud host. With the CirrOS image, the default username is cirros and the password is gocubsgo.
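The same security group changes can also be made from the CLI instead of the dashboard; for example, to permit ICMP and SSH in the default security group:
[root@controller ~]# openstack security group rule create --proto icmp default
[root@controller ~]# openstack security group rule create --proto tcp --dst-port 22 default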
List the flavors:
[root@controller ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
List the available images:
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 42b7a330-85ad-462a-9282-e6b09576b806 | cirros | active |
+--------------------------------------+--------+--------+
List the available networks:
[root@controller ~]# openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-------------+--------------------------------------+
| 00e77183-d7d9-435c-9ad8-3a29c08c1b1d | selfservice | 4bc8c51e-7962-4cd5-8322-a71d1f369806 |
| b66bf1a4-1f9b-40ea-9dfe-0a72fdc40685 | provider | bcf07ef2-5b80-4bd4-8fd0-ad7b92b78673 |
+--------------------------------------+-------------+--------------------------------------+
List the security groups:
[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| f0d7b4f6-1aa3-49c2-a47b-7c57f4183d48 | default | Default security group | 86fc1bb169a443f98fdaf8e2fb25f9cd | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
Launch an instance; the value of --nic net-id= is the ID of the selfservice network:
[root@controller ~]# openstack server create --flavor m1.nano --image cirros --nic net-id=00e77183-d7d9-435c-9ad8-3a29c08c1b1d --security-group f0d7b4f6-1aa3-49c2-a47b-7c57f4183d48 selfservice-instance
+-------------------------------------+-----------------------------------------------+
| Field | Value |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | CnTVkqc67WzA |
| config_drive | |
| created | 2020-06-30T06:10:51Z |
| flavor | m1.nano (0) |
| hostId | |
| id | fc8355a8-8035-4079-9023-8b79ced74490 |
| image | cirros (42b7a330-85ad-462a-9282-e6b09576b806) |
| key_name | None |
| name | selfservice-instance |
| progress | 0 |
| project_id | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| properties | |
| security_groups | name='f0d7b4f6-1aa3-49c2-a47b-7c57f4183d48' |
| status | BUILD |
| updated | 2020-06-30T06:10:51Z |
| user_id | d0451d8b9a7245c09d47b85197ccc80c |
| volumes_attached | |
+-------------------------------------+-----------------------------------------------+
List the servers. test is the instance created from the dashboard a moment ago; comparing the two, selfservice-instance shows one IP address fewer, the missing one being the floating IP:
[root@controller ~]# openstack server list
+--------------------------------------+----------------------+--------+-----------------------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+----------------------+--------+-----------------------------------------+--------+---------+
| fc8355a8-8035-4079-9023-8b79ced74490 | selfservice-instance | ACTIVE | selfservice=172.16.1.118 | cirros | m1.nano |
| b801ccd4-1a69-4c97-9b66-91dc7d50be2b | test | ACTIVE | selfservice=172.16.1.186, 10.10.128.105 | cirros | m1.nano |
+--------------------------------------+----------------------+--------+-----------------------------------------+--------+---------+
Create a floating IP on the provider network:
[root@controller ~]# openstack floating ip create provider
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2020-06-30T06:14:11Z |
| description | |
| dns_domain | None |
| dns_name | None |
| fixed_ip_address | None |
| floating_ip_address | 10.10.128.116 |
| floating_network_id | b66bf1a4-1f9b-40ea-9dfe-0a72fdc40685 |
| id | 89524eb8-5bb2-4052-944c-49a0695e6d14 |
| location | Munch({'project': Munch({'domain_name': 'Default', 'domain_id': None, 'name': 'admin', 'id': u'86fc1bb169a443f98fdaf8e2fb25f9cd'}), 'cloud': '', 'region_name': '', 'zone': None}) |
| name | 10.10.128.116 |
| port_details | None |
| port_id | None |
| project_id | 86fc1bb169a443f98fdaf8e2fb25f9cd |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | None |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| updated_at | 2020-06-30T06:14:11Z |
+---------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Bind the floating IP to the selfservice-instance instance:
[root@controller ~]# openstack server add floating ip selfservice-instance 10.10.128.116
[root@controller ~]# openstack server list
+--------------------------------------+----------------------+--------+-----------------------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+----------------------+--------+-----------------------------------------+--------+---------+
| fc8355a8-8035-4079-9023-8b79ced74490 | selfservice-instance | ACTIVE | selfservice=172.16.1.118, 10.10.128.116 | cirros | m1.nano |
| b801ccd4-1a69-4c97-9b66-91dc7d50be2b | test | ACTIVE | selfservice=172.16.1.186, 10.10.128.105 | cirros | m1.nano |
+--------------------------------------+----------------------+--------+-----------------------------------------+--------+---------+
You can now connect via a remote connection tool or the ssh command line. With the CirrOS image, the default username is cirros and the password is gocubsgo.
[root@controller ~]# ssh [email protected]
The authenticity of host '10.10.128.116 (10.10.128.116)' can't be established.
ECDSA key fingerprint is SHA256:SeWDL1ypr0s+jcmzUji9vx1umCkM9SXe/SqW8ItEKN0.
ECDSA key fingerprint is MD5:43:f8:67:17:eb:20:0d:40:55:74:f6:49:90:f1:de:2e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.128.116' (ECDSA) to the list of known hosts.
[email protected]'s password:
$ ip a
1: lo: mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1450 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:bc:0b:7a brd ff:ff:ff:ff:ff:ff
inet 172.16.1.118/24 brd 172.16.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:febc:b7a/64 scope link
valid_lft forever preferred_lft forever