
     
OpenStack can be genuinely painful to deploy. Automated deployment tools make it convenient to set up, but for learning it is worth doing the first deployment by hand: a manual installation gives a much clearer picture of OpenStack's workflow and of how the components relate to one another.



OpenStack Mitaka Installation and Configuration Tutorial



1. Lab environment:

OS: CentOS 7.2 minimal

Network: management network on eth0, virtual machine instance (provider) network on eth1

controller: 192.168.22.202 (eth0), 192.168.30.202 (eth1)

compute01:  192.168.22.203 (eth0), 192.168.30.203 (eth1)

I followed the official OpenStack installation guide. Installing keystone, glance and nova went smoothly, but neutron was where the pain started: googling for neutron articles mostly turns up posts about how complicated it is, which is a real blow to a newcomer. (There was no way around it, so I just kept working through it step by step.) I failed many times along the way and finally got everything running after about two weeks.

2. Environment preparation:

1) On all nodes, disable firewalld, NetworkManager and SELinux, and set each node's hostname to its node name.

2) Install the chrony time synchronization service:

#yum install chrony -y

3) On the controller node, allow the other nodes to sync from it: allow 192.168.21.0/22 (in /etc/chrony.conf)

4) On the compute node, sync time from the controller: server controller iburst (in /etc/chrony.conf; see the sketch below)

5) Start the service and enable it at boot:

#systemctl enable chronyd.service

#systemctl start chronyd.service
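As a concrete sketch of steps 3) and 4) above (the file path is the stock chrony configuration; adjust the subnet to match your own management network):

On controller, add to /etc/chrony.conf:
allow 192.168.21.0/22

On compute01, comment out the default server/pool entries in /etc/chrony.conf and add:
server controller iburst

Then restart chronyd on each node and check synchronization:
# systemctl restart chronyd.service
# chronyc sources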

6) Prepare the Aliyun and EPEL repositories:

#yum install -y centos-release-openstack-mitaka

#yum install https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-6.noarch.rpm -y

#yum install python-openstackclient -y                           #### install the OpenStack client ####

#yum install openstack-selinux -y

#yum upgrade

#reboot

7) Install the database (MariaDB)       ###controller###

#yum install mariadb mariadb-server python2-PyMySQL -y

###### Database configuration ######

### Create and edit /etc/my.cnf.d/openstack.cnf:

[mysqld]

default-storage-engine = innodb

innodb_file_per_table

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

###### Start the service ######

# systemctl enable mariadb.service

# systemctl start mariadb.service

###### Secure the database installation ######

#mysql_secure_installation

#### Check that the port is listening: netstat -lnp | grep 3306 ###

8) Install RabbitMQ (uses port 5672)   ##controller##

# yum install rabbitmq-server -y                    ### install ###

# systemctl enable rabbitmq-server.service                  ### enable at boot ###

# systemctl start rabbitmq-server.service                        ### start the service ###

#rabbitmqctl add_user openstack zx123456                ### add the openstack user and set its password to zx123456 ###

#rabbitmqctl set_permissions openstack ".*" ".*" ".*"             ### grant the new user configure, write and read permissions ###
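A quick way to confirm the account and permissions took effect (standard rabbitmqctl listing commands):

# rabbitmqctl list_users
# rabbitmqctl list_permissions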

9) Install memcached (uses port 11211)   ##controller##

# yum install memcached python-memcached -y                         ### install ###

# systemctl enable memcached.service                  ### enable at boot ###

# systemctl start memcached.service                       ### start the service ###

10) Install keystone   ##controller##

###### Log in to the database and create the keystone database:

#mysql -uroot -pzx123456

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'zx123456';

       ### set the authorized user and password ###

Generate a random value to use as the admin token: openssl rand -hex 10

# yum install openstack-keystone httpd mod_wsgi -y           ##controller##

Edit /etc/keystone/keystone.conf:

[DEFAULT]

admin_token = <the random value generated above>     # mainly for security; you can also leave the default

[database]

connection = mysql+pymysql://keystone:zx123456@192.168.22.202/keystone

[token]

provider = fernet
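Before syncing the database, it is worth confirming that the connection string actually works; a quick check using the keystone account and password created above:

# mysql -h 192.168.22.202 -u keystone -pzx123456 -e 'SHOW DATABASES;'

The output should include the keystone database.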

# Populate the identity service database:

#su -s /bin/sh -c "keystone-manage db_sync" keystone

# Initialize the Fernet keys:

#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
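If fernet_setup succeeded, the key repository should now exist at its default location (two keys, named 0 and 1):

# ls /etc/keystone/fernet-keys/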

# Configure the Apache HTTP server

Edit /etc/httpd/conf/httpd.conf:

ServerName controller

Create the file /etc/httpd/conf.d/wsgi-keystone.conf with the following content:

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Start the Apache HTTP service:

# systemctl enable httpd.service

# systemctl start httpd.service
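A quick check that the keystone endpoints are actually listening on both ports (httpd should own 5000 and 35357):

# netstat -lnp | grep -E '5000|35357'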

# Create the service entity and API endpoints

Configure the authentication token:

#export OS_TOKEN=2e8cd090b7b50499d5f9

Configure the endpoint URL:

#export OS_URL=http://controller:35357/v3

Configure the identity API version:

#export OS_IDENTITY_API_VERSION=3

# Create the service entity for the identity service:

#openstack service create --name keystone --description "OpenStack Identity" identity

# Create the identity service API endpoints:

#openstack endpoint create --region RegionOne identity public http://controller:5000/v3

#openstack endpoint create --region RegionOne identity internal http://controller:5000/v3

#openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

# Create domains, projects, users and roles

Create the "default" domain:

#openstack domain create --description "Default Domain" default

Create the admin project:

#openstack project create --domain default --description "Admin Project" admin

Create the admin user:

#openstack user create --domain default --password-prompt admin

## you will be prompted for the admin user's password ##

Create the admin role:

openstack role create admin

Add the "admin" role to the admin project and user:

openstack role add --project admin --user admin admin

Create the "service" project:

openstack project create --domain default --description "Service Project" service

Create the "demo" project:

openstack project create --domain default --description "Demo Project" demo

Create the "demo" user:

openstack user create --domain default --password-prompt demo

## you will be prompted for the demo user's password ##

Create the user role:

openstack role create user

Add the "user" role to the "demo" project and user:

openstack role add --project demo --user demo user

Verification:

Disable the temporary admin_token authentication mechanism:

Edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api] and [pipeline:api_v3] sections.

Unset the OS_TOKEN and OS_URL environment variables:

unset OS_TOKEN OS_URL

Request a token as the admin user to check that authentication works:

#openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue

[Screenshot: output of openstack token issue]

Create environment-variable files for the admin and demo projects.

admin project: add the following content

vim admin-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=zx123456

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

demo project:

vim demo-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=zx123456

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

Load the environment variables and request a token:

#source admin-openrc

#openstack token issue

[Screenshot: output of openstack token issue with admin-openrc loaded]



3. Glance installation and configuration

Install glance on the controller node.

1) Log in to MySQL, create the database and user:

mysql -uroot -pzx123456

CREATE DATABASE glance;         ## create the glance database ##

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'zx123456';

2) Set up the keystone authentication: the user, password and role to use.

source admin-openrc

Create the glance user:

openstack user create --domain default --password-prompt glance

## you will be prompted for the glance password ##

Add the admin role to the glance user and the service project:

openstack role add --project service --user glance admin

3) Create the glance service entity:

openstack service create --name glance --description "OpenStack Image" image

4) Create the image service API endpoints:

openstack endpoint create --region RegionOne image public http://controller:9292

openstack endpoint create --region RegionOne image internal http://controller:9292

openstack endpoint create --region RegionOne image admin http://controller:9292

5) Install the glance package   #controller#

yum install openstack-glance -y

6) Configure glance-api:

vim /etc/glance/glance-api.conf

[database]

connection = mysql+pymysql://glance:zx123456@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = zx123456

[paste_deploy]

flavor = keystone     # specify the authentication mechanism

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance

7) Configure /etc/glance/glance-registry.conf:

vim /etc/glance/glance-registry.conf

[database]

connection = mysql+pymysql://glance:zx123456@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = zx123456

[paste_deploy]

flavor = keystone

8) Create the image storage directory and change its owner:

mkdir /var/lib/glance

chown glance. /var/lib/glance

9) Populate the database schema:

su -s /bin/sh -c "glance-manage db_sync" glance

10) Enable the services at boot and start them:

#systemctl enable openstack-glance-api.service openstack-glance-registry.service

#systemctl start openstack-glance-api.service openstack-glance-registry.service

Check the service endpoint information:

#openstack catalog list

Verification:

#source admin-openrc

#wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

## download the image ##

openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public

## upload the image ##

openstack image list     ## check the result ##
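Beyond openstack image list, you can confirm the upload really landed in the store directory configured above and that the image is active (a quick sanity check, not part of the original steps):

# ls -lh /var/lib/glance
# openstack image show cirros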


Note 1: In Neutron's configuration files, auth_uri must be changed to identity_uri (other services can use auth_url, but for the neutron service it must be identity_uri, otherwise it will not run properly).

4. Nova installation and configuration

Controller node

1) Create the databases and the user/password used to connect:

mysql -uroot -pzx123456

CREATE DATABASE nova_api;

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'zx123456';

flush privileges;

2) Check the result:

select user,host from mysql.user where user='nova';

3) Create the service entity, keystone user and role assignment:

Create the nova service entity:

openstack service create --name nova --description "OpenStack Compute" compute

Create the user:

openstack user create --domain default --password-prompt nova

## you will be prompted for the nova password ##

Associate the user, role and project:

openstack role add --project service --user nova admin

Create the compute service API endpoints:

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

4) Check the result:

openstack catalog list

5) Install the nova packages:

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y

6) Edit the nova configuration file:

vim /etc/nova/nova.conf

[DEFAULT]

enabled_apis = osapi_compute,metadata

rpc_backend= rabbit

auth_strategy= keystone

my_ip= 192.168.22.202

use_neutron= True

firewall_driver= nova.virt.firewall.NoopFirewallDriver

[api_database]

connection = mysql+pymysql://nova:zx123456@controller/nova_api

[database]

# nova database connection

connection = mysql+pymysql://nova:zx123456@controller/nova

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456

[keystone_authtoken]

# keystone authentication settings

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers= controller:11211

auth_type= password

project_domain_name= default

user_domain_name= default

project_name= service

username= nova

password= zx123456

[glance]

api_servers = http://controller:9292

[vnc]

vncserver_listen = 192.168.22.202

vncserver_proxyclient_address = 192.168.22.202

[oslo_concurrency]

# lock file path

lock_path = /var/lib/nova/tmp

7) Sync the databases:

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

## warnings can be ignored ##

8) Verify:

mysql -uroot -pzx123456

use nova;

show tables;

9) Start the services and enable them at boot:

#systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node

1) Install the nova-compute service:

yum install openstack-nova-compute -y

2) Edit the configuration file:

vim /etc/nova/nova.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

# compute node IP

my_ip = 192.168.22.203

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

# the compute node's management network IP

vncserver_proxyclient_address= 192.168.22.203

novncproxy_base_url = http://192.168.22.202:6080/vnc_auto.html

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

# lock file

lock_path = /var/lib/nova/tmp

egrep -c '(vmx|svm)' /proc/cpuinfo

## determine whether the compute node supports hardware acceleration for virtual machines ##

If it returns 0, add the configuration below:

[libvirt]

virt_type = qemu
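A small sketch that automates the check above; it assumes the crudini utility is installed (yum install crudini), which is not part of the original steps:

if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    # no hardware virtualization support: fall back to plain QEMU emulation
    crudini --set /etc/nova/nova.conf libvirt virt_type qemu
fi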

3) Start the services:

#systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

Verification

Run the following commands on the controller:

#source /root/admin-openrc

#openstack compute service list

[Screenshot: output of openstack compute service list]


Note 2: Each configuration file's group owner should be the user that runs the corresponding service; otherwise the service cannot read the file and will fail to start.

5. Neutron installation and configuration

Controller node

1) Create the neutron database and grant privileges:

mysql -uroot -pzx123456

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'zx123456';

2) Obtain admin credentials and create the neutron user:

#source admin-openrc

#openstack user create --domain default --password-prompt neutron

## you will be prompted for the neutron password ##

3) Add the admin role to the neutron user:

openstack role add --project service --user neutron admin

4) Create the neutron service entity:

openstack service create --name neutron --description "OpenStack Networking" network

5) Create the networking service API endpoints:

openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696

6) Networking option: self-service networks

Install the neutron packages:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

7) Neutron service configuration files:

mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

vim /etc/neutron/neutron.conf

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

[database]

connection = mysql+pymysql://neutron:zx123456@controller/neutron   # change to your own database password

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456   # change to your rabbitmq password

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = zx123456   # change to your own neutron service password

[nova]

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = zx123456   # change to your own nova service password

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

ML2 plugin configuration:

mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = *

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = True

Linux bridge agent configuration file:

mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:eth1   # set this to the provider network NIC; here it is eth1

[vxlan]

enable_vxlan = True

local_ip = 192.168.22.202   # the management network IP of this node (192.168.22.202)

l2_population = True

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

L3 agent configuration file:

mv /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak

vim /etc/neutron/l3_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

external_network_bridge =    # intentionally left empty

DHCP agent configuration:

mv /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak

vim /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = True

Metadata agent configuration:

mv /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak

vim /etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_ip = controller

metadata_proxy_shared_secret = zx123456   # change to your own METADATA_SECRET (or keep this one); it must match the nova configuration

Configure the nova service to use neutron:

vim /etc/nova/nova.conf    # add the following

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = zx123456   # change to your own neutron service password

service_metadata_proxy = True

metadata_proxy_shared_secret = zx123456   # must match the METADATA_SECRET above

8) Create a symlink for the ML2 plugin configuration:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

9) Sync the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
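To confirm the schema was created, the neutron database account set up earlier can list the tables (just a sanity check):

# mysql -h controller -u neutron -pzx123456 -e 'USE neutron; SHOW TABLES;' | head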

10) Restart nova-api:

systemctl restart openstack-nova-api.service

11) Start the neutron services and enable them at boot:

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Compute node configuration

1) Install the neutron packages:

yum install openstack-neutron-linuxbridge ebtables ipset

2) Configuration

Neutron service configuration:

mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

vim /etc/neutron/neutron.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456   # change to your rabbit password

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = zx123456        # change to your own neutron service password

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Linux bridge agent configuration:

mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:eth1  # set to the provider network NIC; here it is eth1

[vxlan]

enable_vxlan = True

local_ip = 192.168.22.203   # change to this node's management network IP (192.168.22.203)

l2_population = True

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the nova service to use neutron:

vim /etc/nova/nova.conf    # add the following

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = zx123456    # change to your own neutron service password

3) Restart the nova service:

systemctl restart openstack-nova-compute.service

4) Start neutron:

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

Verification

Run on the controller node:

source /root/admin-openrc

neutron ext-list

[Screenshot: output of neutron ext-list]


neutron agent-list

[Screenshot: output of neutron agent-list]


The Neutron installation is complete.

 

6. Dashboard installation and configuration

Controller node

1) Install the dashboard:

yum install openstack-dashboard -y

2) Adjust the settings:

vim /etc/openstack-dashboard/local_settings

Change the following settings:

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*', ]

CACHES = {

    'default': {

        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',

        'LOCATION': '192.168.22.202:11211',

    },

}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "image": 2,

    "volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

TIME_ZONE = "UTC"

3) Restart the apache and memcached services:

systemctl enable httpd.service memcached.service

systemctl restart httpd.service memcached.service

systemctl status httpd.service memcached.service

Verification

http://192.168.22.202/dashboard
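A quick reachability check from the command line before opening a browser (expect an HTTP 200 or a redirect to the login page):

# curl -sI http://192.168.22.202/dashboard | head -n 1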

How to resolve the error NovaException: Unexpected vif_type=binding_failed when starting an instance:

1. If you hit this error, first check whether the ml2 configuration file is correct.
2. Check whether metadata_agent.ini on the network node is misconfigured (the metadata agent proxies the instances' metadata requests); note that a mistake in metadata_agent.ini will not show up as an error in the logs, e.g. writing admin_tenant_name = service as dmin_tenant_name = service.
3. Boot the instance with networking disabled and see whether it starts normally; if it does, the problem is in neutron, and if it still does not start, you need to check elsewhere.

 

Issues encountered while installing cinder:

1. When configuring cinder, change volumes_dir=$state_path/volumes in the configuration file on the cinder volume node to volumes_dir=/etc/cinder/volumes.
2. In /etc/rc.d/init.d/openstack-cinder-volume, keep only --config-file $config and remove --config-file $distconfig to avoid mistakes, e.g.: daemon --user cinder --pidfile $pidfile "$exec --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile"
3. In the cinder volume node's configuration file, volume_group = stack-volumes-lvmdriver-1 means the default volume group is stack-volumes-lvmdriver-1; a volume group with that name must be created before starting cinder-volume (see the sketch below).
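A sketch of preparing that volume group, assuming a spare disk such as /dev/sdb is dedicated to cinder (the device name is only an example, not from the original text):

# pvcreate /dev/sdb
# vgcreate stack-volumes-lvmdriver-1 /dev/sdb
# vgs stack-volumes-lvmdriver-1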

Steps to delete a neutron network (example commands follow the list):
1. router-gateway-clear
2. router-interface-delete
3. subnet-delete
4. router-delete
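For example, with a hypothetical router demo-router and subnet demo-subnet (the names are placeholders):

# neutron router-gateway-clear demo-router
# neutron router-interface-delete demo-router demo-subnet
# neutron subnet-delete demo-subnet
# neutron router-delete demo-router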

Problems encountered with the neutron service:

Sometimes the logs show no errors but the service still does not work properly, for example instances cannot obtain an IP address.

1. Use neutron agent-list to check whether each component is in a healthy state.

If an agent's state is abnormal, check whether the nodes' clocks are out of sync. (When the logs show no errors but the state is abnormal, it is almost always caused by unsynchronized time.)
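A quick way to compare clocks and chrony state across the nodes (assumes SSH access from the controller, using the hostnames from this guide):

# for h in controller compute01; do echo -n "$h: "; ssh $h date; done
# chronyc sources -v     # run on each node to confirm chrony is actually syncing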

7. Summary

① When you run into problems, stay calm, don't give up, and think them through.

② OpenStack problems are almost always caused by mistakes in the configuration files.

③ Restart a service several times and watch whether it reports errors; some services appear to start up OK but are not actually running.

④ Always check the logs after starting a service (grep -i 'error').

⑤ The clocks on all hosts must be synchronized.

 

Finally, a screenshot of the finished installation:

[Screenshot: the finished deployment]