Apr 09, 2015
 

OpenStack training is becoming common in China, but as far as I can tell it is all done on plain virtual machines, which strikes me as rather crude: they are not eating their own dog food. Next time you attend an OpenStack training, I suggest you ask whether the training itself runs OpenStack on OpenStack.

Last time I wrote this up for the Icehouse release. This time it is Juno; the process is essentially the same, but I have reordered some steps to make it easier to follow.

This time I am running everything on the 刻通云 cloud platform, which should make the whole process smoother.

Basic setup


There is a default base network, but we still need to create our own networks to meet OpenStack's needs.

To summarize:

Role              Management network    VM traffic network    External network
Controller node   eth0 (10.0.0.11)      --                    eth1 (192.168.100.11)
Network node      eth0 (10.0.0.21)      eth1 (10.0.1.21)      eth2 (192.168.100.21)
Compute node      eth0 (10.0.0.31)      eth1 (10.0.1.31)      --

 

The documentation is clear on the following points:

  1. The network node needs three NICs.
  2. The controller node and the network node need the external network, i.e. so-called public IPs.
  3. The compute node does not need a public IP.
  4. All VM traffic to the public network goes through the network node.
  5. 192.168.100.0/24 serves as the "public" IP range.

Based on the diagram above, we build our own networks:

  1. Create a router.
  2. Create the management network, the public network, and the VM traffic network (mind the order).
  3. Connect the public network to the router.
  4. Allocate a public IP.
  5. Bind the IP to the router.

 


Controller node

 

Network setup

Create an Ubuntu 14.04 VM; 1 core and 2 GB of RAM should be enough. For the network, remember to assign a fixed IP address.
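If you prefer the command line and the platform exposes the standard OpenStack APIs, creating the same VM might look roughly like this (the flavor, image, and net-id values are placeholders for whatever your provider offers):

nova boot --flavor <1core-2GB-flavor> --image <ubuntu-14.04-image> \
--nic net-id=<management-net-id>,v4-fixed-ip=10.0.0.11 controller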


To be able to log in over VNC, choose password login.


The last step:


The VM is now created.


We need to add a second NIC to the controller node, connected to the public network.


Have a look at the topology:


Log in over VNC

Since the VM now has two NICs, the default gateway has to be set manually:

route add default gw 192.168.100.1

At this point the VM can reach the outside world.

 

To reach the VM remotely you can use port mapping or a VPN; here I simply set up port mapping on the router. Then you can ssh straight to the router's IP address.
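For example, assuming you mapped external port 2222 on the router to port 22 of the controller (the port number here is just an illustration):

ssh -p 2222 root@<router-public-IP>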

 

Configure the package source

apt-get install ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
  "trusty-updates/juno main" > /etc/apt/sources.list.d/cloudarchive-juno.list

Update

apt-get update && apt-get dist-upgrade

NTP server

apt-get install -y ntp

Database

apt-get install mariadb-server python-mysqldb

Edit /etc/mysql/my.cnf:

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Restart the database:

service mysql restart

 

Message queue: RabbitMQ

apt-get install -y rabbitmq-server

Keystone

 

Install

apt-get install -y keystone

Configure

Create the keystone database; all database work here is done from mysql -u root -p:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

exit;

Delete the SQLite database:

rm /var/lib/keystone/keystone.db

Edit /etc/keystone/keystone.conf:

[database]
connection = mysql://keystone:KEYSTONE_DBPASS@10.0.0.11/keystone

[DEFAULT]
admin_token=ADMIN
log_dir=/var/log/keystone

Initialize the keystone database:

service keystone restart
keystone-manage db_sync

Set the environment variables:

export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://10.0.0.11:35357/v2.0

Create a user with admin rights:

keystone user-create --name=admin --pass=admin_pass --email=admin@domain.com
keystone role-create --name=admin
keystone role-create --name=_member_
keystone tenant-create --name=admin --description="Admin Tenant"
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone user-role-add --user=admin --role=_member_ --tenant=admin

Create a regular user:

keystone user-create --name=demo --pass=demo_pass --email=demo@domain.com
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo

Create the service tenant:

keystone tenant-create --name=service --description="Service Tenant"

Define the Identity service and its API endpoint:

 

keystone service-create --name=keystone --type=identity --description="OpenStack Identity"

Create the endpoint:

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://192.168.100.11:5000/v2.0 \
--internalurl=http://10.0.0.11:5000/v2.0 \
--adminurl=http://10.0.0.11:35357/v2.0

Verify keystone

Use the commands below to check that keystone was initialized correctly.

Set up the environment variables by creating two files, creds and admin_creds:

cat <<EOF >>/root/creds
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://192.168.100.11:5000/v2.0/"
EOF
cat <<EOF >>/root/admin_creds
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.0.11:35357/v2.0
EOF

Verify

Log out of the SSH session first so that the environment variables set earlier are gone, then log back in.

Source the environment variables before running the commands below:

source creds

 

Then the following works:

root@controller:~# keystone user-list
+----------------------------------+-------+---------+------------------+
|                id                |  name | enabled |      email       |
+----------------------------------+-------+---------+------------------+
| 6f8bcafd62ec4e23ab2be28016829f91 | admin |   True  | admin@domain.com |
| 66713a75b7c14f73a1c5a015241f5826 |  demo |   True  | demo@domain.com  |
+----------------------------------+-------+---------+------------------+
root@controller:~# keystone role-list
+----------------------------------+----------+
|                id                |   name   |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| cd8dec7752d24a028f95657556f7573d |  admin   |
+----------------------------------+----------+
root@controller:~# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| efc81990ab4c433f94573e2e0fcf08c3 |  admin  |   True  |
| be10dc11d4034b389bef8bbcec657f6f |   demo  |   True  |
| cb45c886bc094f65940ba29d79eab8aa | service |   True  |
+----------------------------------+---------+---------+

Check the logs

The logs live under /var/log/keystone/. Empty them first, then check whether any errors still show up.

echo "" > /var/log/keystone/keystone-all.log
echo "" > /var/log/keystone/keystone-manage.log
tail  /var/log/keystone/*

 

Glance

Installing OpenStack components is all fairly similar.

apt-get install -y glance python-glanceclient

Create the database, from mysql -u root -p:

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

exit;

Create the glance user and service in keystone:

keystone user-create --name=glance --pass=service_pass --email=glance@domain.com
keystone user-role-add --user=glance --tenant=service --role=admin

Set up the endpoint:

keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://192.168.100.11:9292 \
--internalurl=http://10.0.0.11:9292 \
--adminurl=http://10.0.0.11:9292

 

Edit /etc/glance/glance-api.conf:

[database]
connection = mysql://glance:GLANCE_DBPASS@10.0.0.11/glance

[DEFAULT]
rpc_backend = rabbit
rabbit_host = 10.0.0.11

[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
identity_uri = http://10.0.0.11:35357
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

[paste_deploy]
flavor = keystone

Edit /etc/glance/glance-registry.conf:

[database]
# The file name to use with SQLite (string value)
#sqlite_db = /var/lib/glance/glance.sqlite
connection = mysql://glance:GLANCE_DBPASS@10.0.0.11/glance


[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

[paste_deploy]
flavor = keystone

Restart the services:

service glance-api restart; service glance-registry restart

Initialize the glance database:

glance-manage db_sync

Upload an image:

source creds
glance image-create --name "cirros-0.3.2-x86_64" --is-public true \
--container-format bare --disk-format qcow2 \
--location http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
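If --location fails because the glance API node cannot reach the CDN, a fallback with the same glance v1 CLI is to download the image first and upload it with --file:

wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
glance image-create --name "cirros-0.3.2-x86_64" --is-public true \
--container-format bare --disk-format qcow2 \
--file cirros-0.3.2-x86_64-disk.img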

List the images:

# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| d7a6d71d-4222-44f4-82d0-49c14ba19676 | cirros-0.3.2-x86_64 | qcow2       | bare             | 13167616 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+

Check the log

root@controller:~# tail /var/log/glance/*
==> /var/log/glance/api.log <==
2014-09-02 07:07:12.315 2946 WARNING glance.store.base [-] Failed to configure store correctly:
 Store sheepdog could not be configured correctly. Reason:
 Error in store configuration: [Errno 2] No such file or directory Disabling add method.
2014-09-02 07:07:12.316 2946 WARNING glance.store [-] Deprecated: glance.store.
sheepdog.Store not found in `known_store`. 
Stores need to be explicitly enabled in the configuration file.

You will see so-called errors like these in the log; they are not a problem. I hope glance improves this logging, because it leaves many newcomers puzzled.

 

Nova

Install the packages:

apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler python-novaclient

Create the nova database, from mysql -u root -p:

CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

exit;

Configure keystone:

keystone user-create --name=nova --pass=service_pass --email=nova@domain.com
keystone user-role-add --user=nova --tenant=service --role=admin

Set up the endpoint:

keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://192.168.100.11:8774/v2/%\(tenant_id\)s \
--internalurl=http://10.0.0.11:8774/v2/%\(tenant_id\)s \
--adminurl=http://10.0.0.11:8774/v2/%\(tenant_id\)s

Edit /etc/nova/nova.conf

Here is the complete content of my nova.conf file:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

rpc_backend = rabbit
rabbit_host = 10.0.0.11
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass

[database]
connection = mysql://nova:NOVA_DBPASS@10.0.0.11/nova

Delete the SQLite database:

rm /var/lib/nova/nova.sqlite

Initialize the nova database:

nova-manage db sync

Restart the nova services:

service nova-api restart
service nova-cert restart
service nova-conductor restart
service nova-consoleauth restart
service nova-novncproxy restart
service nova-scheduler restart

Check:

# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        controller                           internal         enabled    :-)   2014-08-26 14:13:08
nova-consoleauth controller                           internal         enabled    :-)   2014-08-26 14:13:08
nova-conductor   controller                           internal         enabled    :-)   2014-08-26 14:13:08
nova-scheduler   controller                           internal         enabled    :-)   2014-08-26 14:13:08

 

Neutron

The controller node also needs the Neutron server:

apt-get install -y neutron-server neutron-plugin-ml2

Create the Neutron database, from mysql -u root -p:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO neutron@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO neutron@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

exit;

Create the neutron user and role in keystone:

keystone user-create --name=neutron --pass=service_pass --email=neutron@domain.com
keystone user-role-add --user=neutron --tenant=service --role=admin

Register the service and endpoint:

keystone service-create --name=neutron --type=network --description="OpenStack Networking"

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ network / {print $2}') \
--publicurl=http://192.168.100.11:9696 \
--internalurl=http://10.0.0.11:9696 \
--adminurl=http://10.0.0.11:9696

Edit /etc/neutron/neutron.conf. The crucial part is nova_admin_tenant_id: you have to obtain it manually with the command below and fill it in yourself.

keystone tenant-list | awk '/ service / { print $2 }'
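If you would rather not paste the id by hand, a small sketch (it assumes the nova_admin_tenant_id line is present and uncommented, as in the listing below):

SERVICE_TENANT_ID=$(keystone tenant-list | awk '/ service / { print $2 }')
sed -i "s/^nova_admin_tenant_id.*/nova_admin_tenant_id = $SERVICE_TENANT_ID/" /etc/neutron/neutron.conf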

 

 

#core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
core_plugin = ml2

# service_plugins =
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering
service_plugins = router

# auth_strategy = keystone
auth_strategy = keystone

# allow_overlapping_ips = False
allow_overlapping_ips = True

rpc_backend = rabbit

rabbit_host = 10.0.0.11

notification_driver = neutron.openstack.common.notifier.rpc_notifier

# ======== neutron nova interactions ==========
# Send notification to nova when port status is active.
notify_nova_on_port_status_changes = True

# Send notifications to nova when port data (fixed_ips/floatingips) change
# so nova can update it's cache.
notify_nova_on_port_data_changes = True

# URL for connection to nova (Only supports one nova region currently).
nova_url = http://10.0.0.11:8774/v2

# Name of nova region to use. Useful if keystone manages more than one region
# nova_region_name =

# Username for connection to nova in admin context
nova_admin_username = nova

# The uuid of the admin nova tenant
nova_admin_tenant_id = cb45c886bc094f65940ba29d79eab8aa

# Password for connection to nova in admin context.
nova_admin_password = service_pass

# Authorization URL for connection to nova in admin context.
nova_admin_auth_url = http://10.0.0.11:35357/v2.0

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#signing_dir = $state_path/keystone-signing
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass

[database]
# This line MUST be changed to actually run the plugin.
# Example:
# connection = mysql://root:pass@127.0.0.1:3306/neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
#connection = sqlite:////var/lib/neutron/neutron.sqlite
connection = mysql://neutron:NEUTRON_DBPASS@10.0.0.11/neutron

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

 

Edit /etc/nova/nova.conf so that nova uses neutron; add the following under [DEFAULT]:

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.0.0.11:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=service_pass
neutron_admin_auth_url=http://10.0.0.11:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

Restart the nova services:

service nova-api restart
service nova-scheduler restart
service nova-conductor restart

There is a bug here that needs fixing; see http://www.tuicool.com/articles/vmaiiua

neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno

Restart the neutron service:

service neutron-server restart

Check the log:

root@controller:~# tail -f /var/log/neutron/*
2014-09-02 07:27:53.950 5373 WARNING neutron.api.extensions [-] Extension fwaas not supported by any of loaded plugins
2014-09-02 07:27:53.952 5373 WARNING neutron.api.extensions [-] Extension flavor not supported by any of loaded plugins
2014-09-02 07:27:53.962 5373 WARNING neutron.api.extensions [-] Extension lbaas_agent_scheduler not supported by any of loaded plugins
2014-09-02 07:27:53.967 5373 WARNING neutron.api.extensions [-] Extension lbaas not supported by any of loaded plugins
2014-09-02 07:27:53.969 5373 WARNING neutron.api.extensions [-] Extension metering not supported by any of loaded plugins
2014-09-02 07:27:53.973 5373 WARNING neutron.api.extensions [-] Extension port-security not supported by any of loaded plugins
2014-09-02 07:27:53.977 5373 WARNING neutron.api.extensions [-] Extension routed-service-insertion not supported by any of loaded plugins

 

The log says various extensions are not supported by any loaded plugin; this is all normal.

 

Horizon

Installing the Dashboard is fairly simple by comparison; it does not need a database.

apt-get install -y apache2 memcached libapache2-mod-wsgi openstack-dashboard

Edit /etc/openstack-dashboard/local_settings.py:

#ALLOWED_HOSTS = ['horizon.example.com', ]
ALLOWED_HOSTS = ['localhost','192.168.100.11']

#OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_HOST = "10.0.0.11"

Restart the Apache service:

service apache2 restart; service memcached restart

At this point you can open http://192.168.100.11/horizon and see the login page, though logging in is not expected to work yet.

Install the OpenStack client

Installing the OpenStack client on the controller node makes things much more convenient; many Neutron operations can then be carried out from there.

apt-get -y install python-openstackclient

 

Network node

A picture makes this easier to understand; this one comes from Red Hat's official documentation.


The network node needs three NICs. People often ask whether one NIC would do. One NIC certainly can work, but it does not help with understanding. On the other hand, machines with three NICs are hard to come by, which is exactly why testing on top of an IaaS is so convenient.


Create a VM named network, remove the default NIC, and add three NICs. Then ssh into the VM. By default it cannot reach the outside; the reason is simple: there is no default route, and adding one manually fixes it.
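The default route is the same one used on the controller node:

route add default gw 192.168.100.1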

Because the network node is special, we pin its NICs to static IPs in /etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# Source interfaces
# Please check /etc/network/interfaces.d before changing this file
# as interfaces may have been defined in /etc/network/interfaces.d
# NOTE: the primary ethernet device is defined in
# /etc/network/interfaces.d/eth0
# See LP: #1262951
#source /etc/network/interfaces.d/*.cfg
# The management network interface
auto eth0
iface eth0 inet static
    address 10.0.0.21
    netmask 255.255.255.0

# VM traffic interface
auto eth1
iface eth1 inet static
    address 10.0.1.21
    netmask 255.255.255.0

# The public network interface
auto eth2
iface eth2 inet static
    address 192.168.100.21
    netmask 255.255.255.0
    gateway 192.168.100.1
    dns-nameservers 114.114.114.114

 

Once that is done, reboot the VM.

Now it can reach the outside, so install the packages:

apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y

Sync the time

apt-get install -y ntp

Edit /etc/ntp.conf:

server 10.0.0.11

Restart the NTP service:

service ntp restart

Install base components:

apt-get install -y vlan bridge-utils

Edit /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply and check:

sysctl -p

 

Install the Neutron components

apt-get install -y neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
dnsmasq neutron-l3-agent neutron-dhcp-agent

Edit /etc/neutron/neutron.conf; the changes here are much fewer than on the controller node.

#core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
core_plugin = ml2

# service_plugins =
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering
service_plugins = router

# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
auth_strategy = keystone

allow_overlapping_ips = True

rpc_backend = neutron.openstack.common.rpc.impl_kombu

rabbit_host = 10.0.0.11

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#signing_dir = $state_path/keystone-signing
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass

Edit /etc/neutron/l3_agent.ini:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

Edit /etc/neutron/dhcp_agent.ini:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True

Edit /etc/neutron/metadata_agent.ini:

auth_url = http://10.0.0.11:5000/v2.0
auth_region = regionOne

admin_tenant_name = service
admin_user = neutron
admin_password = service_pass
nova_metadata_ip = 10.0.0.11
metadata_proxy_shared_secret = helloOpenStack

Log in to the controller node and add the following to /etc/nova/nova.conf under [DEFAULT]:

service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = helloOpenStack

Restart the nova-api service:

service nova-api restart

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ovs]
local_ip = 10.0.1.21
tunnel_type = gre
enable_tunneling = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Restart Open vSwitch:

service openvswitch-switch restart

Create br-ex

Create br-ex to connect to the external network. This is not the easiest thing to grasp; the diagram helps.

Roughly: we create a bridge br-ex and plug eth2 into it; eth2 is the interface connected to the public-network router.


ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2

Below is the result of my changes; take some time to work through it.

 

Edit /etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# Source interfaces
# Please check /etc/network/interfaces.d before changing this file
# as interfaces may have been defined in /etc/network/interfaces.d
# NOTE: the primary ethernet device is defined in
# /etc/network/interfaces.d/eth0
# See LP: #1262951
#source /etc/network/interfaces.d/*.cfg
# The management network interface
auto eth0
iface eth0 inet static
    address 10.0.0.21
    netmask 255.255.255.0

# VM traffic interface
auto eth1
iface eth1 inet static
    address 10.0.1.21
    netmask 255.255.255.0

# The public network interface (old configuration, now moved to br-ex)
# auto eth2
# iface eth2 inet static
# address 192.168.100.21
# netmask 255.255.255.0
# gateway 192.168.100.1
# dns-nameservers 114.114.114.114

auto eth2
iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
    address 192.168.100.21
    netmask 255.255.255.0
    gateway 192.168.100.1
    dns-nameservers 114.114.114.114

 

Reboot the VM.

Swap the MAC addresses of br-ex and eth2

Because of restrictions in the underlying network, 192.168.100.21 and 192.168.100.11 cannot talk to each other at this point: for security reasons the platform binds and filters the MAC and IP addresses allowed on each port.

Find each interface's MAC address with ifconfig, then swap them with the commands below.

  • br-ex MAC address: c2:32:7d:cf:9d:43
  • eth2 MAC address: fa:16:3e:80:5d:e6

ip link set eth2 addr c2:32:7d:cf:9d:43
ip link set br-ex addr fa:16:3e:80:5d:e6

 

Now the IPs on the external network can reach each other. Note that these changes are temporary: if the neutron services are restarted, the MAC addresses revert. Our experiment does not require restarting those services, so this quick fix is enough; a thorough solution to the problem follows later.
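Separately, the oddbit post in the references below shows how to pin a bridge's MAC address in the OVS database so that it survives restarts; a sketch using the eth2 MAC from above:

ovs-vsctl set bridge br-ex other-config:hwaddr=fa:16:3e:80:5d:e6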

 

Set the environment variables

cat <<EOF >>/root/creds
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://192.168.100.11:5000/v2.0/"
EOF

Now you can see the installed agents:

source creds
neutron agent-list

 

# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+
| id                                   | agent_type         | host    | alive | admin_state_up |
+--------------------------------------+--------------------+---------+-------+----------------+
| 3a80d2ea-bcf6-4835-b125-55144948024c | Open vSwitch agent | network | :-)   | True           |
| 4219dd20-c4fd-4586-b2fc-c81bec0015d6 | L3 agent           | network | :-)   | True           |
| e956687f-a658-4226-a34f-368da61e9e44 | Metadata agent     | network | :-)   | True           |
| f3e841f8-b803-4134-9ba6-3152c3db5592 | DHCP agent         | network | :-)   | True           |
+--------------------------------------+--------------------+---------+-------+----------------+

 

Compute node

 


 

Create a VM named compute1, remove the default NIC, and add two NICs. Then ssh into the VM.

The compute node does not normally need a public connection, but since I have to install packages it must get online. You can attach the VM to the external network after creating it, and detach it again once the installation is done.

route add default gw 192.168.100.1

Now it can reach the outside, so install the packages:

apt-get update && apt-get upgrade -y && apt-get dist-upgrade -y

Sync the time

apt-get install -y ntp

Edit /etc/ntp.conf:

server 10.0.0.11

Restart the NTP service:

service ntp restart

Install the KVM packages:

apt-get install -y kvm libvirt-bin pm-utils

Install the compute node components:

apt-get install -y nova-compute-kvm python-guestfs

Make the kernel image readable by all users (qemu and libguestfs need to read it):

dpkg-statoverride  --update --add root root 0644 /boot/vmlinuz-$(uname -r)

Create the script /etc/kernel/postinst.d/statoverride:

#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}

Make it executable:

chmod +x /etc/kernel/postinst.d/statoverride

Edit /etc/nova/nova.conf and add the following:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

auth_strategy = keystone
rpc_backend = rabbit
rabbit_host = 10.0.0.11
my_ip = 10.0.0.31
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://192.168.100.11:6080/vnc_auto.html
glance_host = 10.0.0.11
vif_plugging_is_fatal=false
vif_plugging_timeout=0


[database]
connection = mysql://nova:NOVA_DBPASS@10.0.0.11/nova

[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass

Delete the SQLite database:

rm /var/lib/nova/nova.sqlite

Restart the compute service:

service nova-compute restart

Edit /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply immediately:

sysctl -p

Install the networking components:

apt-get install -y neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent

Edit /etc/neutron/neutron.conf:

#core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
core_plugin = ml2

# service_plugins =
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering
service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

rpc_backend = neutron.openstack.common.rpc.impl_kombu

rabbit_host = 10.0.0.11

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#signing_dir = $state_path/keystone-signing
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ovs]
local_ip = 10.0.1.31
tunnel_type = gre
enable_tunneling = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Restart OVS:

service openvswitch-switch restart

Edit /etc/nova/nova.conf again, adding the following under [DEFAULT]:

network_api_class = nova.network.neutronv2.api.API
neutron_url = http://10.0.0.11:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = service_pass
neutron_admin_auth_url = http://10.0.0.11:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

Edit /etc/nova/nova-compute.conf, switching to qemu:

[DEFAULT]
compute_driver=libvirt.LibvirtDriver
[libvirt]
virt_type=qemu

Restart the related services:

service nova-compute restart
service neutron-plugin-openvswitch-agent restart

That completes the installation.

Log in to the controller node:

root@controller:~# source creds 
root@controller:~# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        controller                           internal         enabled    :-)   2014-09-02 10:31:03
nova-conductor   controller                           internal         enabled    :-)   2014-09-02 10:31:04
nova-scheduler   controller                           internal         enabled    :-)   2014-09-02 10:30:58
nova-consoleauth controller                           internal         enabled    :-)   2014-09-02 10:31:00
nova-compute     compute1                             nova             enabled    :-)   2014-09-02 10:30:57
root@controller:~#

 

Creating a VM from the command line

On the controller node, just run the commands below. The image was already uploaded above. All of this could be done in the Dashboard instead, but working on the command line gives a deeper understanding.

The following steps are done on the controller node.

Create the external network

source creds

#Create the external network:
neutron net-create ext-net --shared --router:external=True

#Create the subnet for the external network:
neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=192.168.100.101,end=192.168.100.200 \
--disable-dhcp --gateway 192.168.100.1 192.168.100.0/24

Create an internal network for the tenant

#Create the internal network:
neutron net-create int-net

#Create the subnet for the internal network:
neutron subnet-create int-net --name int-subnet \
--dns-nameserver 114.114.114.114 --gateway 172.16.1.1 172.16.1.0/24
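Before wiring up the router, it is worth confirming that both networks and their subnets exist:

neutron net-list
neutron subnet-list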

Create a router and connect it to the external network

#Create the router:
neutron router-create router1

#Attach the router to the internal subnet:
neutron router-interface-add router1 int-subnet

#Attach the router to the external network by setting it as the gateway:
neutron router-gateway-set router1 ext-net

Create a key pair

ssh-keygen

Add the public key:

nova keypair-add --pub-key ~/.ssh/id_rsa.pub key1

Set up the security group:

# Permit ICMP (ping):
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

# Permit secure shell (SSH) access:
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Boot a VM:

NET_ID=$(neutron net-list | awk '/ int-net / { print $2 }')
nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=$NET_ID \
--security-group default --key-name key1 instance1

List the VMs:

nova list

Allocate a floating IP:

neutron floatingip-create ext-net

Associate the floating IP:

nova floating-ip-associate instance1 192.168.100.102
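Here 192.168.100.102 is the address my floatingip-create call returned; yours may differ. If you want to script the allocate-and-associate step instead of copying the address by hand, a sketch:

# capture the allocated address from the table output, then associate it
FLOATING_IP=$(neutron floatingip-create ext-net | awk '/ floating_ip_address / { print $4 }')
nova floating-ip-associate instance1 $FLOATING_IP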

At this point you will find that from the controller node you simply cannot reach the router at 192.168.100.101 or the floating IP 192.168.100.102.

To reach the VM you need to log in to the network node, where the following commands give you access:

# ip netns
qdhcp-bf7f3043-d696-4735-9bc7-8c2e4d95c8d5
qrouter-7e8bbb53-1ea6-4763-a69c-a0c875b5224b

The first namespace serves the VM network's DHCP; the second is the router.

# ip netns exec qdhcp-bf7f3043-d696-4735-9bc7-8c2e4d95c8d5 ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1216 (1.2 KB)  TX bytes:1216 (1.2 KB)

tap1a85db16-da Link encap:Ethernet  HWaddr fa:16:3e:ce:e0:e2  
          inet addr:172.16.1.3  Bcast:172.16.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fece:e0e2/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:415 errors:0 dropped:0 overruns:0 frame:0
          TX packets:105 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:64724 (64.7 KB)  TX bytes:10228 (10.2 KB)
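For example, to reach the instance's fixed IP through the router namespace (take the address from nova list; the one below is a placeholder):

ip netns exec qrouter-7e8bbb53-1ea6-4763-a69c-a0c875b5224b ping -c 3 <instance-fixed-ip>
# cirros 0.3.2 default login: cirros / cubswin:)
ip netns exec qrouter-7e8bbb53-1ea6-4763-a69c-a0c875b5224b ssh cirros@<instance-fixed-ip>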

 


 

Reaching the public network

You have probably noticed an obvious problem: from the network node you can ping the VM's floating IP and the router's IP, but from the controller node you cannot.

For a cleaner result, where the VM can actually ping the public network, we need to understand a bit more. All of this traffic leaves through the port holding 192.168.100.21, so we must configure that port to allow every IP and MAC address through.

Log in to the network node and ping 192.168.100.101 and 192.168.100.102 to learn their MAC addresses.

# arp -a
? (10.0.0.11) at fa:16:3e:34:d0:7a [ether] on eth0
? (192.168.100.102) at fa:16:3e:0c:be:cd [ether] on br-ex
? (10.0.1.31) at fa:16:3e:eb:96:1c [ether] on eth1
? (192.168.100.101) at fa:16:3e:0c:be:cd [ether] on br-ex
? (192.168.100.1) at fa:16:3e:c2:a8:a8 [ether] on br-ex

 

The following can be done on the controller node.

Obtain a token via curl.

Using the token, modify the allow_address_pairs of the port for 192.168.100.21; while you are at it you can update the ports for eth2 and br-ex as well, so that restarting services is no longer a worry.

For the detailed steps, this post covers it:

http://www.chenshake.com/use-the-uos-api/
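As a minimal sketch of the idea, assuming the provider exposes the standard Keystone v2 and Neutron APIs (the endpoints, credentials, and port id are placeholders, and note that the Neutron attribute is spelled allowed_address_pairs):

# 1. get a token
TOKEN=$(curl -s -X POST http://<cloud-keystone>:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth":{"tenantName":"<tenant>","passwordCredentials":{"username":"<user>","password":"<password>"}}}' \
| python -c 'import sys,json; print json.load(sys.stdin)["access"]["token"]["id"]')

# 2. allow the whole external range on the port that carries 192.168.100.21
curl -s -X PUT http://<cloud-neutron>:9696/v2.0/ports/<port-id> \
-H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" \
-d '{"port":{"allowed_address_pairs":[{"ip_address":"192.168.100.0/24"}]}}'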

 

 

VNC access

If you log in to Horizon and open a VM's console, VNC may not be reachable; you need to log in to uos and adjust the security group rules. By default the first VM's VNC console uses port 6080, or you can simply open all ports.
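If the platform also lets you manage its security groups with the standard nova CLI, opening the noVNC port might look like this (run against the cloud's API, not your lab controller; the group name is an assumption):

nova secgroup-add-rule default tcp 6080 6080 0.0.0.0/0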


References

http://oddbit.com/rdo-hangout-multinode-packstack-slides/#/

https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst

Also see http://blog.oddbit.com/2014/05/23/open-vswitch-and-persistent-ma/

ovs-vsctl operations

root@network:~# ovs-vsctl show
533105dd-bd0d-4af1-a331-c9394fbcb775
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.0.2"
root@network:~# ovs-vsctl add-br br-ex
root@network:~# ovs-vsctl show        
533105dd-bd0d-4af1-a331-c9394fbcb775
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.0.2"
root@network:~# ovs-vsctl add-port br-ex eth2
root@network:~# ovs-vsctl show
533105dd-bd0d-4af1-a331-c9394fbcb775
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.0.2"

Restarting services on the network node

service neutron-plugin-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-l3-agent restart
service neutron-metadata-agent restart
service dnsmasq restart

Appendix

I suggest using a VPN for access; to keep things simple I use PPTP.

With PPTP, once you dial in, your local machine can no longer reach the internet by default; you need to change one setting.


Just untick that box ("Use default gateway on remote network").

After that, when you dial in, traffic to the VMs goes through the VPN tunnel while internet traffic still uses your normal network.

You also need to add a route on your own machine; on Windows 7 and 8, adding a route requires administrator rights.

Press Windows key + X, then A.

That brings up a command prompt with administrator rights.

Add a route:

route add 192.168.100.0 mask 255.255.255.0 10.100.100.1

Now you can ping the controller node VM. Not exactly trivial, is it?
