OpenStack Queens Compute Service · Adding a Ceph-Integrated Compute Node

Before deploying the compute node, verify that the following items are configured correctly. The detailed procedures are covered in "OpenStack Queens Dual-Node Mode · Basic Environment Introduction".

  • Disable the system firewall
  • Configure the yum repositories
  • Configure host name resolution
  • Configure time synchronization
  • Install the applications

The applications to install on a Ceph-integrated compute node are listed below (a sample install command follows the list):

  • centos-release-openstack-queens
  • python-openstackclient
  • openstack-neutron-linuxbridge
  • openstack-nova-compute
  • ceph
  • ceph-radosgw
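A minimal install sketch: the centos-release-openstack-queens package provides the Queens repository, so it should be installed first, then the rest in one pass:

yum install -y centos-release-openstack-queens

yum install -y python-openstackclient openstack-neutron-linuxbridge openstack-nova-compute ceph ceph-radosgw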

After installing the applications above in order, configure the services on the new compute node. The procedure is exactly the same as described earlier; the only differences are the compute node's IP in the configuration files, plus syncing the Ceph key and related files.

Configuring the Network Service · neutron

Configure neutron

vi /etc/neutron/neutron.conf

[DEFAULT]
bind_host = 10.10.100.152
auth_strategy = keystone
transport_url = rabbit://openstack:Openstack123@controller:5672

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = Neutron123

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the linuxbridge agent

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens34

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]
# If not using VXLAN, comment out the following three options, or set enable_vxlan = false
enable_vxlan = true
local_ip = 10.10.100.152
l2_population = true

Configure nova

vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
region_name = RegionOne
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = Neutron123

If bridge filtering was configured earlier, the new node likewise needs its operating system kernel to support bridge filters (use the same method as on the controller node; a sketch follows).
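A sketch of that method, assuming the standard br_netfilter kernel module and the usual bridge-filter sysctl switches (verify the values against the controller node's setup):

modprobe br_netfilter

cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl -p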

Start the service

systemctl enable neutron-linuxbridge-agent

systemctl start neutron-linuxbridge-agent

Verify the result on the controller node

source /opt/admin

openstack network agent list
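The new node's Linux bridge agent should appear in the list with Alive shown as :-) and State as UP; if it does not, check transport_url and the agent log on the new node.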

Configuring the Compute Service · nova-compute

Configure nova-compute

vi /etc/nova/nova.conf

[DEFAULT]
my_ip = 10.10.100.152
enabled_apis = osapi_compute,metadata
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:Openstack123@controller:5672

[api]
auth_strategy = keystone

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = Nova123

[libvirt]
# If the CPU does not support hardware virtualization, fall back to qemu
#virt_type = qemu

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
auth_url = http://controller:35357/v3
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Placement123

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://$my_ip:6080/vnc_auto.html

Start the services

systemctl enable libvirtd openstack-nova-compute

systemctl start libvirtd openstack-nova-compute

Verify the result on the controller node

openstack compute service list --service nova-compute
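If the new host does not show up, it may not yet have been discovered into a cell; on the controller, the standard Queens cell discovery command is:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova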

Configuring the Storage Service · ceph

Configure user authentication

Sync the key. On the ceph-deploy node, run the following command:

ceph auth get-key client.cinder | ssh 10.10.100.152 tee client.cinder.key

Copy the following files from the original compute node (10.10.100.151) to the new node:

scp /root/secret.xml root@10.10.100.152:/root

scp /etc/ceph/ceph.client.cinder.keyring root@10.10.100.152:/etc/ceph/
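Note that the [libvirt] settings below also reference /etc/ceph/ceph.conf; if that file is not yet present on the new node, copy it over in the same way:

scp /etc/ceph/ceph.conf root@10.10.100.152:/etc/ceph/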

Alternatively, regenerate the secret.xml file under the root home directory on the new compute node; the UUID must be identical to the one used on the original node:

cat > secret.xml << EOF
<secret ephemeral='no' private='no'>
  <uuid>d9de3482-448c-4fc4-8ccc-f32e00b8764e</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

Then run the following commands on the new compute node (mind the file paths):

virsh secret-define --file secret.xml

virsh secret-set-value --secret d9de3482-448c-4fc4-8ccc-f32e00b8764e --base64 $(cat client.cinder.key)
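To confirm the secret is in place, the standard virsh queries can be used; the returned value should be the cinder key and the UUID should match secret.xml:

virsh secret-list

virsh secret-get-value d9de3482-448c-4fc4-8ccc-f32e00b8764e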

Configure nova-compute

vi /etc/nova/nova.conf

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = d9de3482-448c-4fc4-8ccc-f32e00b8764e

Once the configuration is done, just restart nova-compute:

systemctl restart openstack-nova-compute
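To verify the RBD backend is in effect, boot a test instance onto this node and check that its disk lands in the vms pool (a hedged check, assuming the cinder keyring is in place; nova names the image <instance-uuid>_disk):

rbd --id cinder ls vms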
