Sep 08, 2012
 

Installing OpenStack Folsom is fairly complex, especially the Quantum part, which introduces a lot of new material. Quantum tenant networks come in two modes, GRE and VLAN, and their configurations differ substantially. One obvious difference is the control node: VLAN mode uses 2 NICs, while GRE mode needs 3. This document uses GRE mode, so the control node needs 3 NICs.

Original English documents:

https://github.com/jedipunkz/openstack_folsom_deploy

https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/stable/GRE/OpenStack_Folsom_Install_Guide_WebVersion.rst

This document is largely a translation of those originals; here is where it differs:

  1. My network differs from the originals'. I have verified this document in a real environment. Since everyone's network is different, wherever IPs appear I use variables to keep things flexible, and you can apply the changes with sed. Most failed installs come down to changing an IP and missing a spot somewhere, which is why I provide sed commands (see the sketch after this list).
  2. The two Keystone data-import scripts from the originals are slightly modified, mainly to use variables and make them more flexible.
  3. MySQL is accessed directly by IP rather than via localhost.
  4. The Keystone token is generated randomly instead of using a password.
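
For example, to retarget every command in this document to your own addresses in one pass, you could save the commands to a file and rewrite the IPs with sed. A sketch only; folsom-install.sh and the target addresses are placeholders for your environment:

# rewrite my management IPs to yours in a saved copy of the commands
sed -i 's/10.1.199.53/192.168.1.10/g; s/10.1.199.6/192.168.1.11/g' folsom-install.sh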

 

Change log

  • Sep 8, 2012: document still in draft.
  • Sep 11, 2012: most of the document done; the quantum packages currently conflict, waiting on an upstream fix.
  • Sep 21, 2012: control node install basically done; logged into the dashboard and created a network. The dashboard still has to be reached at http://ip/horizon.
  • Oct 11, 2012: control node installed. The Folsom repository now counts as officially released. Successfully logged into the dashboard.
  • Oct 12, 2012: added the compute node, but there are still problems; the newly added compute node is not visible.
  • Oct 15, 2012: basically debugged; created the first VM, but its network does not work yet and it is unreachable. VNC still has problems.
  • Oct 16, 2012: adjusted the VNC settings following the Essex version; VNC now works, and I have reported back to the author. Metadata is not working: keys are not injected into the VM. Tweaked nova.conf.
  • Nov 2, 2012: the Quantum network still does not work properly; I need more time to understand and learn it.
  • Nov 27, 2012: finally able to reach the VMs, but they still cannot reach external networks. Debugging; I hope to finish the whole document this week.
  • Nov 29, 2012: after many reinstalls, access to VMs basically works, but VMs still cannot reach external networks, probably a Quantum bug. Folsom's first point release also came out today; hopefully once Ubuntu integrates it, all related bugs will be fixed. The document is basically usable now.
  • Jan 5, 2013: the original author made many changes; I proofread this document against them and improved a few places. The originals now separate the control node from the network node, which aids understanding, but the change is too large so I am not adopting it. One main problem remains: VMs cannot reach external networks.
  • Jan 17, 2013: the repository has been updated to Folsom 2012.2.1, which indeed fixes several obvious bugs, and I adjusted the document accordingly. The VMs-cannot-reach-external-networks problem is still unsolved, which is truly frustrating.

 

Introduction

                                 Control node (3 NICs)       Compute node (2 NICs)
  Management network (eth0)      10.1.199.53/24              10.1.199.6/24
  VM network, OVS in             10.0.0.3/24                 10.0.0.4/24
  tunnel mode (eth1)
  Public bridge (eth2)           no IP assigned              -
  Hostname                       controller                  compute1
  Services                       MySQL, RabbitMQ, Nova,      kvm, quantum,
                                 Glance, Keystone, Quantum   nova-compute

 

Requirements

  1. The control node really does need 3 NICs and the compute node 2. To test migration, you need 2 compute nodes.
  2. The machine must support KVM; you can check by running kvm-ok (typical output shown after this list).
  3. All commands are run as root.
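
For item 2, kvm-ok comes from the cpu-checker package. Typical output on a machine with virtualization enabled looks like this:

apt-get install -y cpu-checker
kvm-ok
# INFO: /dev/kvm exists
# KVM acceleration can be used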

 

 

Control Node

Operating system

Install Ubuntu 12.04.1 Server, minimal install; only the SSH server is needed. Cinder requires a dedicated partition or disk.

Folsom is now in the official Ubuntu 12.04 repositories, but you have to add them by hand. See the official notes on the repository.

cat <<EOF >>/etc/apt/sources.list
deb  http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/folsom main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main
EOF

Run the following commands:

apt-get install ubuntu-cloud-keyring
apt-get update && apt-get -y dist-upgrade

 

Hostname configuration

 

# cat /etc/hostname 
controller

# cat /etc/hosts
127.0.0.1       localhost
10.1.199.53      controller.chenshake.com        controller
10.1.199.6      compute1.chenshake.com  compute1

# hostname
controller

# hostname -f
controller.chenshake.com

 

Network

Edit /etc/network/interfaces directly:

root@node53:~# cat /etc/network/interfaces 
# This file describes network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# Modified by convert_static.sh.
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.1.199.53
hwaddress ether 00:25:90:2d:7a:42  
netmask 255.255.255.0
network 10.1.199.0
gateway 10.1.199.1
dns-search chenshake.com
dns-nameservers 8.8.8.8

# VMs Networks with OVS in tunnel mode
auto eth1
    iface eth1 inet static
    address 10.0.0.3
    netmask 255.255.255.0

# Public Bridge
auto eth2
    iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on 
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down

 

Restart networking:

/etc/init.d/networking restart

Enable IP forwarding:

sed -i -r 's/^\s*#(net\.ipv4\.ip_forward=1.*)/\1/' /etc/sysctl.conf
echo 1 > /proc/sys/net/ipv4/ip_forward

Verify the change:

# sysctl -p
net.ipv4.ip_forward = 1

After making these changes, reboot the machine.

Check the machine's routing table:

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.1.199.1      0.0.0.0         UG    100    0        0 eth0
10.0.0.0        *               255.255.255.0   U     0      0        0 eth1
10.1.199.0      *               255.255.255.0   U     0      0        0 eth0

NTP server

Edit /etc/ntp.conf and add two lines below server ntp.ubuntu.com:

server ntp.ubuntu.com
server 127.127.1.0
fudge 127.127.1.0 stratum 10

Or run the following command directly:

sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf

Restart the NTP service:

service ntp restart

Environment variables

cat >/root/novarc <<EOF
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export MYSQL_PASS=password
export SERVICE_PASSWORD=password
export RABBIT_PASSWORD=password
export FIXED_RANGE=10.0.0.0/24
export FLOATING_RANGE=$(/sbin/ifconfig eth0 | awk '/inet addr/ {print $2}' | cut -f2 -d ":" | awk -F "." '{print $1"."$2"."$3}').224/27
export OS_AUTH_URL="http://localhost:5000/v2.0/"
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=$(openssl rand -hex 10)
export MASTER="$(/sbin/ifconfig eth0 | awk '/inet addr/ {print $2}' | cut -f2 -d ":")"
export LOCAL_IP="$(/sbin/ifconfig eth1 | awk '/inet addr/ {print $2}' | cut -f2 -d ":")"
export OS_TEST_TENANT=bank
export OS_TEST_USER=chenshake
export OS_TEST_NET=bank_net
export OS_TEST_ROUTER=bank_router
export OS_TEST_SUBNET=10.10.10.0/24
EOF

 

Adjust the passwords to your needs.

source novarc
echo "source novarc">>.bashrc

 

 

MySQL

These are the databases we will use:

  Database   User       Password
  mysql      root       password
  nova       nova       password
  keystone   keystone   password
  glance     glance     password
  cinder     cinder     password
  quantum    quantum    password

 

Installation

Preseed the installer so it does not prompt for a password:

cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.5 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.5 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.5 mysql-server/start_on_boot boolean true
MYSQL_PRESEED

Install MySQL:

apt-get -y install mysql-server python-mysqldb curl

Configuration

Allow remote access to MySQL:

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

Restart the service:

service mysql restart

Create the databases:

mysql -uroot -p$MYSQL_PASS <<EOF
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '$MYSQL_PASS';
CREATE DATABASE quantum;
GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'%' IDENTIFIED BY '$MYSQL_PASS';
FLUSH PRIVILEGES;
EOF

RabbitMQ

Installation

apt-get -y install rabbitmq-server

Configuration

Change the default password

We change the default guest password to password:

rabbitmqctl change_password guest $RABBIT_PASSWORD
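
A quick optional sanity check after the change:

rabbitmqctl list_users   # 'guest' should still be listed; only its password changed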

Keystone

Installation

apt-get -y install keystone

Configuration

Edit /etc/keystone/keystone.conf:

[DEFAULT]
admin_token = d111cf2d97251a9e0422
bind_host = 0.0.0.0
public_port = 5000
admin_port = 35357
compute_port = 8774
verbose = True
debug = True
log_file = keystone.log
log_dir = /var/log/keystone
log_config = /etc/keystone/logging.conf
[sql]
connection = mysql://keystone:password@10.1.199.53:3306/keystone
idle_timeout = 200

 

Or run the following script directly:

sed -i -e " s/# admin_token = ADMIN/admin_token = $SERVICE_TOKEN/g; s/# bind_host = 0.0.0.0/bind_host = 0.0.0.0/g; s/# public_port = 5000/public_port = 5000/g; s/# admin_port = 35357/admin_port = 35357/g; s/# compute_port = 8774/compute_port = 8774/g; s/# verbose = True/verbose = True/g; s/# idle_timeout/idle_timeout/g" /etc/keystone/keystone.conf

Point it at the MySQL database:

sed -i '/connection = .*/{s|sqlite:///.*|mysql://'"keystone"':'"$MYSQL_PASS"'@'"$MASTER"'/keystone|g}' /etc/keystone/keystone.conf

Restart the service and initialize the database:

service keystone restart
keystone-manage db_sync

Import Keystone data

keystone-data.sh

wget http://www.chenshake.com/wp-content/uploads/2012/09/keystone-data.sh_.txt
mv keystone-data.sh_.txt keystone-data.sh
bash keystone-data.sh
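
The script itself is not inlined here, but broadly it uses the service token to create the admin and service tenants, the admin user and role, and one service user per component. A minimal sketch of the kind of calls involved, assuming the standard Folsom keystone CLI and the novarc variables (the real script does more than this):

keystone tenant-create --name admin
keystone tenant-create --name service
keystone user-create --name admin --pass $OS_PASSWORD
keystone role-create --name admin
# ...followed by keystone user-role-add to bind admin to its role, and
# nova/glance/quantum/cinder users under the service tenant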

 

Import endpoints

keystone-endpoints.sh

wget http://www.chenshake.com/wp-content/uploads/2012/09/keystone-endpoints.sh_.txt
mv keystone-endpoints.sh_.txt keystone-endpoints.sh
bash keystone-endpoints.sh
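
Again, roughly what this script does: register each service in the catalog and attach its endpoint URLs. A hedged example for Keystone itself (the real script also covers nova, glance, quantum, cinder and EC2):

keystone service-create --name keystone --type identity --description 'Keystone Identity Service'
# then, using the service id printed above:
keystone endpoint-create --region RegionOne --service-id <service-id> \
  --publicurl "http://$MASTER:5000/v2.0" \
  --adminurl "http://$MASTER:35357/v2.0" \
  --internalurl "http://$MASTER:5000/v2.0"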

Testing

Test with curl:

curl -d '{"auth": {"tenantName": "admin", "passwordCredentials":{"username": "admin", "password": "password"}}}' -H "Content-type:application/json" http://$MASTER:35357/v2.0/tokens | python -mjson.tool

 

Check the logs:

grep ERROR /var/log/keystone/keystone.log
ps -ef | grep -i keystone-all

 

Glance

Installation

apt-get -y install glance

Configuration

Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf; in both files, change the same four settings:

sql_connection = mysql://glance:password@10.1.199.53/glance
admin_tenant_name = service
admin_user = glance
admin_password = password

Or run the following commands instead:

sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/glance/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; " /etc/glance/glance-api.conf  /etc/glance/glance-registry.conf
sed -i '/sql_connection = .*/{s|sqlite:///.*|mysql://'"glance"':'"$MYSQL_PASS"'@'"$MASTER"'/glance|g}' /etc/glance/glance-registry.conf /etc/glance/glance-api.conf

Edit /etc/glance/glance-api.conf:

#notifier_strategy = noop
notifier_strategy = rabbit

#rabbit_password = guest
rabbit_password = password

Run this command to make the change:

sed -i " s/notifier_strategy = noop/notifier_strategy = rabbit/g;s/rabbit_password = guest/rabbit_password = $RABBIT_PASSWORD/g;" /etc/glance/glance-api.conf

 

Run the following commands:

cat <<EOF >>/etc/glance/glance-api.conf
flavor = keystone+cachemanagement
EOF
cat <<EOF >>/etc/glance/glance-registry.conf 
flavor = keystone
EOF

Restart the services:

service glance-api restart && service glance-registry restart

Sync the database:

glance-manage db_sync

Download images

We download the CirrOS image for testing; it is only about 10 MB. The official Ubuntu image is about 220 MB, and the official Ubuntu images all require key-based login.

CirrOS

Download the image:

wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

Upload the image:

glance image-create --name=cirros-0.3.0-x86_64 --public  --container-format=bare \
--disk-format=qcow2 < /root/cirros-0.3.0-x86_64-disk.img

CirrOS can be logged into with a username and password, or with a key:

user:cirros
password:cubswin:)

 

Official Ubuntu image

Download the image:

wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

Upload the image:

glance image-create --name="Ubuntu 12.04 cloudimg amd64" --public \
--container-format=ovf --disk-format=qcow2 < /root/precise-server-cloudimg-amd64-disk1.img

user:ubuntu

Key-based login only.

Testing

List the images:

glance image-list

 

Show an image's details (substitute your own image ID):

glance image-show 12e2b864-9601-4506-b19d-3f663c0b2e15

Open-vSwitch

Installation

apt-get install -y openvswitch-switch

Configuration

Set up the bridges:

ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2
ip link set up br-ex

 

You can inspect what was created with the commands below; I am still learning their exact uses.

ovs-vsctl -h
ovs-vsctl list-br
ovs-vsctl show

Check the result:

# ovs-vsctl list-br
br-ex
br-int

# ovs-vsctl show
89742cb3-5d15-4150-a278-a4054ab9c219
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    ovs_version: "1.4.0+build0"

 

Quantum

Installation

apt-get -y install quantum-server python-cliff \
quantum-plugin-openvswitch-agent \
quantum-l3-agent quantum-dhcp-agent python-pyparsing

Configuration

Edit /etc/quantum/quantum.conf:

auth_strategy = keystone
fake_rabbit = False
rabbit_host = 10.1.199.53
rabbit_password = password

 

Or run the following command:

sed -i -e " s/# auth_strategy/auth_strategy/g; s/# fake_rabbit/fake_rabbit/g; s/# rabbit_host = localhost/rabbit_host = $MASTER/g; s/# rabbit_password = guest/rabbit_password = $RABBIT_PASSWORD/g" /etc/quantum/quantum.conf

 

Edit /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:

sql_connection = mysql://quantum:password@10.1.199.53:3306/quantum

[OVS]
tenant_network_type = gre 
enable_tunneling = True
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.0.3

Or run the following command:

sed -i -e " s/# Example: tenant_network_type = gre/tenant_network_type = gre/g; s/# Default: enable_tunneling = False/enable_tunneling = True/g; s/# Example: tunnel_id_ranges = 1:1000/tunnel_id_ranges = 1:1000/g; s/# Default: integration_bridge = br-int/integration_bridge = br-int/g; s/# Default: tunnel_bridge = br-tun/tunnel_bridge = br-tun/g; s/# Default: local_ip =/local_ip = $LOCAL_IP/g" /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

 

Set the database connection:

sed -i '/sql_connection = .*/{s|sqlite:///.*|mysql://'"quantum"':'"password"'@'"$MASTER"'/quantum|g}' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

Edit /etc/quantum/l3_agent.ini and /etc/quantum/api-paste.ini:

[DEFAULT]
admin_tenant_name = service
admin_user = quantum
admin_password = password

Or run the following commands:

sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/quantum/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; " /etc/quantum/l3_agent.ini
sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/quantum/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; " /etc/quantum/api-paste.ini

Edit /etc/quantum/l3_agent.ini:

debug = True
use_namespaces = False
metadata_ip = 10.1.199.53

Or run the following command:

sed -i -e " s/# debug = True/debug = True/g; s/# use_namespaces = True/use_namespaces = False/g; s/# metadata_ip =/metadata_ip = $MASTER/g" /etc/quantum/l3_agent.ini

Edit /etc/quantum/dhcp_agent.ini:

use_namespaces = False

Or run the command:

sed -i -e " s/# use_namespaces = True/use_namespaces = False/g; "  /etc/quantum/dhcp_agent.ini

 

Restart the services:

service quantum-server restart
service quantum-plugin-openvswitch-agent restart
service quantum-dhcp-agent restart
service quantum-l3-agent restart

Other OpenStack components need an explicit database-initialization step; for Quantum you only need to restart the quantum-server service and it creates the tables automatically.
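
If you want to confirm the tables appeared, a quick check against the database with the credentials granted earlier:

mysql -uquantum -p$MYSQL_PASS -h $MASTER quantum -e 'SHOW TABLES;'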

Create a network for the demo tenant

Use the provided script to create a network for the demo user:

wget http://www.chenshake.com/wp-content/uploads/2012/09/quantum-networking.sh_.txt
mv quantum-networking.sh_.txt quantum-networking.sh

The script needs a few changes:

##############################################################
### Public Network ###########################################
##############################################################

# Provider Router Information - what name should 
# this provider have in Quantum?
PROV_ROUTER_NAME="provider-router"

# Name of External Network (Don't change it!)
EXT_NET_NAME="ext_net"

# External Network addressing - our official 
# Internet IP address space
EXT_NET_CIDR="10.1.199.0/24"
EXT_NET_LEN=${EXT_NET_CIDR#*/}

# External bridge that we have configured 
# into l3_agent.ini (Don't change it!)
EXT_NET_BRIDGE=br-ex

# IP of external bridge (br-ex) - this node's 
# IP in our official Internet IP address space:
EXT_GW_IP="10.1.199.13"

# IP of the Public Network Gateway - The 
# default GW in our official Internet IP address space:
EXT_NET_GATEWAY="10.1.199.1"

# Floating IP range
POOL_FLOATING_START="10.1.199.130"      # First public IP to be used for VMs
POOL_FLOATING_END="10.1.199.150"        # Last public IP to be used for VMs 

###############################################################

The most confusing setting above is EXT_GW_IP. It is in fact the IP address of eth2 on the control node, but it is not configured through /etc/network/interfaces; this script assigns it. Once the script has run, you can ping this IP.
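
Once the script has run, you can verify this with (addresses are mine; substitute yours):

ping -c 3 10.1.199.13
ip addr show br-ex   # should now carry 10.1.199.13/24, as in the listing further below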

After the changes, run the script:

root@node53:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.1.199.1      0.0.0.0         UG    100    0        0 eth0
10.0.0.0        *               255.255.255.0   U     0      0        0 eth1
10.1.199.0      *               255.255.255.0   U     0      0        0 eth0
root@node53:~# bash quantum-networking.sh 
Added interface to router f69ecf3d-d476-433a-82a6-de20614b9d32
Created a new subnet:
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "10.1.199.130", "end": "10.1.199.150"} |
| cidr             | 10.1.199.0/24                                    |
| dns_nameservers  |                                                  |
| enable_dhcp      | False                                            |
| gateway_ip       | 10.1.199.1                                       |
| host_routes      |                                                  |
| id               | ef65a5bd-39d2-496f-a042-7234b5b8956e             |
| ip_version       | 4                                                |
| name             |                                                  |
| network_id       | 92c05466-6f80-4e5f-bbc3-59987df8d489             |
| tenant_id        | ab38cf34ab0a4a9995c84a53044a2269                 |
+------------------+--------------------------------------------------+
Set gateway for router f69ecf3d-d476-433a-82a6-de20614b9d32
root@node53:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.1.199.1      0.0.0.0         UG    100    0        0 eth0
10.0.0.0        *               255.255.255.0   U     0      0        0 eth1
10.1.199.0      *               255.255.255.0   U     0      0        0 eth0
10.1.199.0      *               255.255.255.0   U     0      0        0 br-ex
10.5.5.0        *               255.255.255.0   U     0      0        0 tap218751e0-6d

Edit /etc/quantum/l3_agent.ini to set the router and the external network. I do it with the commands below:

router=$(quantum router-list | awk '/provider-router/ {print $2}')
ext_net=$(quantum net-list | awk '/ext_net/ {print $2}')
sed -i -e " s/# router_id =/router_id = $router/g; s/# gateway_external_network_id =/gateway_external_network_id = $ext_net/g;" /etc/quantum/l3_agent.ini 

 

At this point, restart the L3 agent:

service quantum-l3-agent restart

Then check the routing table:

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.1.199.1      0.0.0.0         UG    0      0        0 eth0
default         10.1.199.1      0.0.0.0         UG    100    0        0 eth0
10.0.0.0        *               255.255.255.0   U     0      0        0 eth1
10.1.199.0      *               255.255.255.0   U     0      0        0 eth0
10.1.199.0      *               255.255.255.0   U     0      0        0 br-ex
10.1.199.0      *               255.255.255.0   U     0      0        0 qg-fba8f518-45
10.5.5.0        *               255.255.255.0   U     0      0        0 tap218751e0-6d
10.5.5.0        *               255.255.255.0   U     0      0        0 qr-61f55d19-9e

Check the IP addresses

You can see a lot of information:

root@node53:~# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:25:90:2d:7a:18 brd ff:ff:ff:ff:ff:ff
    inet 10.1.199.53/24 brd 10.1.199.255 scope global eth0
    inet6 fe80::225:90ff:fe2d:7a18/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:25:90:2d:7a:19 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 brd 10.0.0.255 scope global eth1
    inet6 fe80::225:90ff:fe2d:7a19/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:25:90:3b:23:c8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:90ff:fe3b:23c8/64 scope link 
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:25:90:3b:23:c9 brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether ba:03:3e:6c:9b:42 brd ff:ff:ff:ff:ff:ff
7: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 00:25:90:3b:23:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.199.13/24 scope global br-ex
8: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether ba:43:40:98:d2:4a brd ff:ff:ff:ff:ff:ff
9: tapd84b2276-bc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:a2:88:0e brd ff:ff:ff:ff:ff:ff
    inet 10.5.5.2/24 brd 10.5.5.255 scope global tapd84b2276-bc
    inet6 fe80::f816:3eff:fea2:880e/64 scope link 
       valid_lft forever preferred_lft forever
10: qr-7afa9a7d-be: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:8e:9c:48 brd ff:ff:ff:ff:ff:ff
    inet 10.5.5.1/24 brd 10.5.5.255 scope global qr-7afa9a7d-be
    inet6 fe80::f816:3eff:fe8e:9c48/64 scope link 
       valid_lft forever preferred_lft forever
11: qg-2a8f838e-06: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:b9:75:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.199.130/24 brd 10.1.199.255 scope global qg-2a8f838e-06
    inet6 fe80::f816:3eff:feb9:75b8/64 scope link 
       valid_lft forever preferred_lft forever

Cinder

Installation

apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget \
open-iscsi iscsitarget-dkms python-cinderclient

Configuration

Partitioning

I dedicate one partition of my disk to volumes:

umount /dev/sda5
pvcreate /dev/sda5
vgcreate cinder-volumes /dev/sda5

Remove the boot-time mount:

sed -i '/nova-volume/s/^/#/' /etc/fstab

iSCSI

sed -i 's/false/true/g' /etc/default/iscsitarget
service iscsitarget restart
service open-iscsi restart

Edit /etc/cinder/cinder.conf; simply running the command below will do:

cat >/etc/cinder/cinder.conf <<EOF
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:$MYSQL_PASS@$MASTER:3306/cinder
iscsi_helper = ietadm 
volume_group = cinder-volumes
rabbit_password= $RABBIT_PASSWORD
logdir=/var/log/cinder
verbose=true
auth_strategy = keystone
EOF

Edit /etc/cinder/api-paste.ini:

admin_tenant_name = service
admin_user = cinder 
admin_password = password

Or use the command:

sed -i -e " s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/cinder/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; " /etc/cinder/api-paste.ini

Sync the database:

cinder-manage db sync

Restart the services:

service cinder-api restart
service cinder-scheduler  restart
service cinder-volume restart

Nova

Installation

apt-get -y install nova-api nova-cert nova-common \
nova-scheduler python-nova python-novaclient nova-consoleauth novnc nova-novncproxy

Configuration

Edit /etc/nova/api-paste.ini:

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 10.1.199.53
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dirname = /tmp/keystone-signing-nova

 

Or run the command directly:

sed -i -e " s/127.0.0.1/$MASTER/g; s/%SERVICE_TENANT_NAME%/service/g; s/%SERVICE_USER%/nova/g; s/%SERVICE_PASSWORD%/$SERVICE_PASSWORD/g; " /etc/nova/api-paste.ini

 

Create /etc/nova/nova.conf by copying and running the command below:

cat >/etc/nova/nova.conf <<EOF
[DEFAULT]

# MySQL Connection #
sql_connection=mysql://nova:$MYSQL_PASS@$MASTER/nova

# nova-scheduler #
rabbit_host=$MASTER
rabbit_password=$RABBIT_PASSWORD
scheduler_driver=nova.scheduler.simple.SimpleScheduler
#compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# nova-api #
cc_host=$MASTER
auth_strategy=keystone
s3_host=$MASTER
ec2_host=$MASTER
nova_url=http://$MASTER:8774/v1.1/
ec2_url=http://$MASTER:8773/services/Cloud
keystone_ec2_url=http://$MASTER:5000/v2.0/ec2tokens
api_paste_config=/etc/nova/api-paste.ini
allow_admin_api=true
use_deprecated_auth=false
ec2_private_dns_show_ip=True
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=$MASTER
metadata_listen=0.0.0.0
enabled_apis=ec2,osapi_compute,metadata

# Networking #
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://$MASTER:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=$SERVICE_PASSWORD
quantum_admin_auth_url=http://$MASTER:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver  
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Compute #
#compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API

# Glance #
glance_api_servers=$MASTER:9292
image_service=nova.image.glance.GlanceImageService

# novnc #
novnc_enable=true
novncproxy_base_url=http://$MASTER:6080/vnc_auto.html
vncserver_proxyclient_address=$MASTER
vncserver_listen=$MASTER

# Misc #
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
#root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
rootwrap_config=/etc/nova/rootwrap.conf
#verbose=true
verbose=false
EOF

 

Sync the database:

nova-manage db sync

Restart the services:

service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-novncproxy restart

Horizon

Installation

apt-get -y install apache2 libapache2-mod-wsgi openstack-dashboard memcached python-memcache

Remove the bundled Ubuntu theme. Edit /etc/openstack-dashboard/local_settings.py:

#Comment these lines
#Enable the Ubuntu theme if it is present.
#try:
#    from ubuntu_theme import *
#except ImportError:
#    pass

Or run the following command:

sed -i '150,153s/^/#/' /etc/openstack-dashboard/local_settings.py

 

Reload the services:

service apache2 restart; service memcached restart

 

 

Access

http://10.1.199.53/horizon
user:admin
pass:password
or
user:demo
pass:password

Take a look at the dashboard (mine is in Chinese). Since no compute service is installed on the control node, you cannot create VMs yet.

 

Compute Node

Operating system

Minimal OS install; just the SSH server is enough.

Add the Folsom repository:

cat <<EOF >>/etc/apt/sources.list
deb  http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/folsom main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main
EOF

Run the following commands:

apt-get install ubuntu-cloud-keyring
apt-get update && apt-get -y dist-upgrade

Network

# cat /etc/network/interfaces 
# This file describes network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# Modified by convert_static.sh.
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 10.1.199.6
hwaddress ether 00:25:90:2d:7a:42  
netmask 255.255.255.0
network 10.1.199.0
gateway 10.1.199.1
dns-search chenshake.com
dns-nameservers 8.8.8.8

# VMs Networks with OVS in tunnel mode
auto eth1
    iface eth1 inet static
    address 10.0.0.4
    netmask 255.255.255.0

 

Restart networking:

/etc/init.d/networking restart

 

IP forwarding

sed -i -r 's/^\s*#(net\.ipv4\.ip_forward=1.*)/\1/' /etc/sysctl.conf
echo 1 > /proc/sys/net/ipv4/ip_forward 

 

Environment variables

cat >/root/novarc <<EOF
export CONTROLLER_IP=10.1.199.53
export MASTER="$(/sbin/ifconfig eth0 | awk '/inet addr/ {print $2}' | cut -f2 -d ":")"
export LOCAL_IP="$(/sbin/ifconfig eth1 | awk '/inet addr/ {print $2}' | cut -f2 -d ":")"
EOF

Adjust the control node IP for your own environment.

source novarc
echo "source novarc">>.bashrc

 

 

NTP

apt-get -y install ntp

Configuration

Edit /etc/ntp.conf to point at the control node:

server 10.1.199.53

 

Or run the command:

sed -i -e " s/server ntp.ubuntu.com/server $CONTROLLER_IP/g" /etc/ntp.conf

 

Restart the service:

service ntp restart

Hypervisor

apt-get install -y kvm libvirt-bin pm-utils

 

Edit /etc/libvirt/qemu.conf and add the following:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet","/dev/net/tun",
]

 

Or run the command below; patching this file with a command is a bit awkward, and I have not found a better way:

cat <<EOF>>/etc/libvirt/qemu.conf
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet","/dev/net/tun",
]
EOF

 

Remove the default virtual bridge:

virsh net-destroy default
virsh net-undefine default

Enable migration

Edit /etc/libvirt/libvirtd.conf and uncomment these three lines:

listen_tls = 0
listen_tcp = 1
auth_tcp = "none" 

 

Or run the following command:

sed -i '/#listen_tls/s/#listen_tls/listen_tls/; /#listen_tcp/s/#listen_tcp/listen_tcp/; /#auth_tcp/s/#auth_tcp/auth_tcp/; /auth_tcp/s/sasl/none/'  /etc/libvirt/libvirtd.conf

 

Edit /etc/init/libvirt-bin.conf:

env libvirtd_opts="-d -l" 

Or use the command:

sed -i '/env libvirtd_opts/s/-d/-d -l/' /etc/init/libvirt-bin.conf

 

Edit /etc/default/libvirt-bin:

libvirtd_opts="-d -l"

 

Or use the command:

sed -i '/libvirtd_opts/s/-d/-d -l/' /etc/default/libvirt-bin

 

Restart the service:

service libvirt-bin restart

 

Open-vSwitch

apt-get install -y openvswitch-switch

 

Configure the bridge:

ovs-vsctl add-br br-int

Quantum

apt-get -y install quantum-plugin-openvswitch-agent

 

Edit /etc/quantum/quantum.conf; the changes are the same as on the control node, so just copy the file over:

scp root@$CONTROLLER_IP:/etc/quantum/quantum.conf /etc/quantum/quantum.conf

Edit /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

You can copy it from the control node; only local_ip needs changing:

scp root@$CONTROLLER_IP:/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

 

Change local_ip:

sed -i 's/^local_ip.*$/local_ip = '$LOCAL_IP'/g' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

 

Restart the services:

service openvswitch-switch restart
service quantum-plugin-openvswitch-agent restart

 

Nova

apt-get -y install nova-compute-kvm novnc nova-novncproxy nova-api-metadata

 

Edit /etc/nova/api-paste.ini:

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 10.1.199.53
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dirname = /tmp/keystone-signing-nova

 

Or run the following command to copy it straight from the control node:

scp root@$CONTROLLER_IP:/etc/nova/api-paste.ini /etc/nova/

 

Edit /etc/nova/nova-compute.conf:

[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True

 

Or run the following command:

cat > /etc/nova/nova-compute.conf <<EOF
[DEFAULT]
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
EOF

 

Edit /etc/nova/nova.conf. We can copy it from the control node and modify it:

scp root@$CONTROLLER_IP:/etc/nova/nova.conf /etc/nova/nova.conf

Change the following settings:

metadata_host=10.1.199.6
enabled_apis=metadata

# Compute #
compute_driver=libvirt.LibvirtDriver

# novnc #
novnc_enable=true
novncproxy_base_url=http://10.1.199.53:6080/vnc_auto.html
vncserver_proxyclient_address=10.1.199.6
vncserver_listen=10.1.199.6

 

You can use this command:

sed -i "/metadata_host/s/$CONTROLLER_IP/$MASTER/; s/^enabled_apis.*$/enabled_apis=metadata/g; s/#compute_driver/compute_driver/g; /vncserver_proxyclient_address/s/$CONTROLLER_IP/$MASTER/; /vncserver_listen/s/$CONTROLLER_IP/$MASTER/" /etc/nova/nova.conf

 

Restart the services:

service nova-novncproxy restart 
service nova-compute restart
service nova-api-metadata restart

At this point you can see both the compute node and the control node from the command line (see below).

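On the control node, for example (each service should show :-) in the State column; a stale node shows XXX):

nova-manage service list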

Creating a VM

These steps run on the control node. Since the quantum script already created a network for the demo tenant, we simply operate as the demo user.

cat > /root/demo << EOF 
export OS_USERNAME=admin
export OS_TENANT_NAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0/
export PS1="[\u@\h \W(demo)]\$ "
EOF

Run:

. demo

Create the VM; adjust the image ID for your environment:

nova keypair-add oskey > oskey.priv
chmod 600 oskey.priv
nova flavor-list
nova image-list
nova boot --flavor 2 --key_name oskey --image ea3ffba1-065e-483f-bfe2-c84184ee76be test1
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

The VM is now created. Check its IP:

nova list
+--------------------------------------+-------+--------+-------------------+
| ID                                   | Name  | Status | Networks          |
+--------------------------------------+-------+--------+-------------------+
| e1425c3a-9930-4ff7-b8f8-fdb4de4e96d9 | test1 | ACTIVE | demo-net=10.5.5.3 |
+--------------------------------------+-------+--------+-------------------+

Now you can SSH into the VM:

ssh -i oskey.priv ubuntu@10.5.5.3

Assign a floating IP

$ quantum floatingip-create ext_net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.1.199.131                         |
| floating_network_id | 92c05466-6f80-4e5f-bbc3-59987df8d489 |
| id                  | afb220f3-ecfc-4919-b6de-eb636b796933 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | c336b7f576c842b48471e1cc6072ddcb     |
+---------------------+--------------------------------------+
[root@node53 ~(demo)]$ quantum port-list
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 218751e0-6d36-4a43-8f9b-0dd07e5f6964 |      | fa:16:3e:c1:b8:f6 | {"subnet_id": "7e158f08-f24d-4fe0-bf5f-570c3c4ac5c7", "ip_address": "10.5.5.2"}     |
| 61f55d19-9eed-46af-ad9f-c071fd432cc8 |      | fa:16:3e:a1:6d:d2 | {"subnet_id": "7e158f08-f24d-4fe0-bf5f-570c3c4ac5c7", "ip_address": "10.5.5.1"}     |
| 73fd2420-b759-48c1-8c82-d8d9aeb3f757 |      | fa:16:3e:1b:70:2a | {"subnet_id": "7e158f08-f24d-4fe0-bf5f-570c3c4ac5c7", "ip_address": "10.5.5.3"}     |
| 8051fc93-d6fa-458e-bc05-5f10da2f5eb0 |      | fa:16:3e:b4:c5:dc | {"subnet_id": "ef65a5bd-39d2-496f-a042-7234b5b8956e", "ip_address": "10.1.199.131"} |
| fba8f518-45ad-4b81-a520-8a739eddb848 |      | fa:16:3e:6f:9b:fb | {"subnet_id": "ef65a5bd-39d2-496f-a042-7234b5b8956e", "ip_address": "10.1.199.130"} |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
[root@node53 ~(demo)]$ quantum floatingip-associate afb220f3-ecfc-4919-b6de-eb636b796933 73fd2420-b759-48c1-8c82-d8d9aeb3f757
Associated floatingip afb220f3-ecfc-4919-b6de-eb636b796933

 

Current known issue

After SSHing into the VM, it still cannot reach external networks.
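
For anyone debugging the same symptom, a few checks I would start with on the control node (suggestions, not a confirmed fix):

iptables -t nat -L -n | grep 10.5.5.   # the L3 agent should have added SNAT rules for the tenant subnet
ovs-vsctl show                         # eth2 must still be a port on br-ex
ip link show eth2                      # the interface should be UP and in promiscuous mode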

 

Appendix

Since the quantum script above already created a network, you can create VMs with the demo account. Below is how to create the network by hand.

Network

This is also the hard part. Run the commands below on the control node; I have tried to keep them as simple as possible.

Create a tenant and a user:

keystone tenant-create --name $OS_TEST_TENANT
tenant=$(keystone tenant-list | awk '/'"$OS_TEST_TENANT"'/ {print $2}')
role=$(keystone role-list | awk '/Member/ {print $2}')
keystone user-create --name=$OS_TEST_USER --pass=$OS_PASSWORD --tenant-id $tenant
user=$(keystone user-list | awk '/'"$OS_TEST_USER"'/ {print $2}')
keystone user-role-add --tenant-id $tenant --user-id $user --role-id $role

 

Create a network for the tenant:

quantum net-create --tenant-id $tenant $OS_TEST_NET

Create a subnet on the tenant network:

quantum subnet-create --tenant-id $tenant $OS_TEST_NET $OS_TEST_SUBNET

 

Create a router for the tenant:

quantum router-create --tenant-id $tenant $OS_TEST_ROUTER

 

Attach the subnet to the router:

subnet=$(quantum net-list | awk '/'"$OS_TEST_NET"'/ {print $6}')
router=$(quantum router-list | awk '/'"$OS_TEST_ROUTER"'/ {print $2}')
quantum router-interface-add $router $subnet

Create an external network under the service tenant:

service=$(keystone tenant-list | awk '/service/ {print $2}') 
quantum net-create --tenant-id $service ext_net --router:external=True

 

Edit /etc/quantum/l3_agent.ini:

gateway_external_network_id = $id_of_ext_net
router_id = $your_router_id

You can make the change with the following commands:

ext_net=$(quantum net-list | awk '/ext_net/ {print $2}')
sed -i -e " s/# router_id =/router_id = $router/g; s/# gateway_external_net_id =/gateway_external_net_id = $ext_net/g;" /etc/quantum/l3_agent.ini 

 

Restart the L3 agent:

service quantum-l3-agent restart

Create the floating IP range:

quantum subnet-create --tenant-id $service --allocation-pool start=10.1.199.102,end=10.1.199.126 --gateway 10.1.199.1 ext_net 10.1.199.100/24 --enable_dhcp=False

 

Set the router's external gateway:

quantum router-gateway-set $router $ext_net

 

Find the IP address of the router's gateway port:

quantum port-list -- --device_id $router --device_owner network:router_gateway

 

The first VM

Since the dashboard does not support floating IP association, this has to be done on the command line. Switch to the user created above, adjusting the values based on novarc:

cat > /root/chenshakerc << EOF 
export OS_USERNAME=chenshake
export OS_TENANT_NAME=bank
export OS_PASSWORD=password
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0/
export PS1="[\u@\h \W(chenshake)]\$ "
EOF

 

Switch to that user:

. chenshakerc

Create the VM; adjust the image ID for your environment:

nova keypair-add oskey > oskey.priv
chmod 600 oskey.priv
nova flavor-list
nova image-list
nova boot --flavor 2 --key_name oskey --image ea3ffba1-065e-483f-bfe2-c84184ee76be test1
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

 

At this point you can see the VM with nova list.


Request a floating IP:

quantum floatingip-create ext_net


List the ports; the IP tells you which port belongs to the VM:

quantum port-list

 

Associate the floating IP with the VM, substituting the two IDs from above (or pull them out with awk, as sketched below):

quantum floatingip-associate $put_id_floating_ip $put_id_vm_port
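
If you prefer not to copy the IDs by hand, the awk pattern used throughout this document can extract them. A sketch assuming, for illustration only, that the VM's fixed IP is 10.10.10.3 and the allocated floating IP is 10.1.199.102:

port_id=$(quantum port-list | awk '/10.10.10.3/ {print $2}')
fip_id=$(quantum floatingip-list | awk '/10.1.199.102/ {print $2}')
quantum floatingip-associate $fip_id $port_id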

 

Check the floating IPs:

quantum floatingip-list

 

 

 

  82 Responses to "Ubuntu 12.04 OpenStack Folsom Installation (GRE mode)"

  1. Thanks for looking into this, Mr. Chen. Could you explain how the new modules in Folsom are configured? It differs a lot from Essex…

    # Networking #
    network_api_class=nova.network.quantumv2.api.API
    quantum_url=http://$MASTER:9696
    quantum_auth_strategy=keystone
    quantum_admin_tenant=service
    quantum_admin_username=quantum
    quantum_admin_password=password
    quantum_admin_auth_url=http://$MASTER:35357/v2.0
    firewall_driver=nova.virt.firewall.NoopFirewallDriver

    # Cinder #
    volume_api_class=cinder.volume.api.API

    • That is because of Quantum; if you keep using nova-network it is actually the same. Cinder replaces nova-volume. The rest is much the same.

  2. There are several bugs. Most obviously, nova is configured but the API does not enable the volume function, so Cinder is unusable; also, OVS ends up with empty bridges.

    • It is not finished yet; the original English document is also being revised. I will reorganize it after the National Day holiday. Quantum is fairly complex and will take some time to digest.

  3. Hi, after I installed Quantum, VMs boot normally and obtain an IP, but I cannot ping them. What do you think could cause this?

  4. Hi, one more question, thanks. I installed Quantum, created a subnet and a router, and attached the subnet to the router. VMs obtain an IP at boot, and I can ping an instance's fixed IP with ip netns exec qrouter-$router-id ping fixed-ip, but SSH cannot connect. console.log under /var/lib/nova/instances/instance-id shows the instance failed to reach the metadata service: util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [13/120s]: url error [Errno 101] Network is unreachable.
    Running ip netns exec qrouter-$router-id iptables --list -t nat shows the mapping between the host IP and 169.254.169.254.
    metadata_ip in /etc/quantum/l3_agent.ini has already been changed. At first I thought the router could not reach metadata_host, but after running router-gateway-set against an external network I instead get: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [13/120s]: url error [Errno 101] No route to host.
    All the nova and quantum components are installed on one machine.
    Where do you think the problem is? Thanks!

    • I am not sure I understand correctly, but my feeling is that with Quantum you must use two nodes: one control node and one compute node.

      • Hi, what kind of component is the router in Quantum? It seems related to Linux network namespaces. In theory a router should be tied to a subnet; if it is tied to a network namespace, doesn't that tie it to a host? My VMs on two compute nodes are in the same subnet but cannot ping each other. Where do you think the problem lies?

    • After installing Folsom, I created 2 instances successfully (on the same compute node). They cannot obtain an IP automatically; after configuring IPs by hand, the instances can ping each other, but not the gateway 192.168.1.1, and not the outside world.
      console.log is as follows:

      Starting network…
      258 udhcpc (v1.18.5) started
      259 Sending discover…
      260 Sending discover…
      261 Sending discover…
      262 No lease, failing
      263 WARN: /etc/rc3.d/S40-network failed
      264 cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
      265 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      266 cloud-setup: failed 1/30: up 10.62. request failed
      267 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      268 cloud-setup: failed 2/30: up 11.63. request failed
      269 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      270 cloud-setup: failed 3/30: up 12.63. request failed
      271 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      272 cloud-setup: failed 4/30: up 13.64. request failed
      273 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      274 cloud-setup: failed 5/30: up 14.65. request failed
      275 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      276 cloud-setup: failed 6/30: up 15.65. request failed
      277 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      278 cloud-setup: failed 7/30: up 16.66. request failed
      279 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      280 cloud-setup: failed 8/30: up 17.66. request failed
      281 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      282 cloud-setup: failed 9/30: up 18.67. request failed
      283 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      284 cloud-setup: failed 10/30: up 19.68. request failed
      285 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      286 cloud-setup: failed 11/30: up 20.69. request failed
      287 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      288 cloud-setup: failed 12/30: up 21.69. request failed
      289 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      290 cloud-setup: failed 13/30: up 22.70. request failed
      291 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      292 cloud-setup: failed 14/30: up 23.71. request failed
      293 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      294 cloud-setup: failed 15/30: up 24.72. request failed
      295 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      296 cloud-setup: failed 16/30: up 25.73. request failed
      297 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      298 cloud-setup: failed 17/30: up 26.74. request failed
      299 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      300 cloud-setup: failed 18/30: up 27.74. request failed
      301 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      302 cloud-setup: failed 19/30: up 28.75. request failed
      303 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      304 cloud-setup: failed 20/30: up 29.76. request failed
      305 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      306 cloud-setup: failed 21/30: up 30.77. request failed
      307 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      308 cloud-setup: failed 22/30: up 31.78. request failed
      309 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      310 cloud-setup: failed 23/30: up 32.79. request failed
      311 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      312 cloud-setup: failed 24/30: up 33.80. request failed
      313 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      314 cloud-setup: failed 25/30: up 34.80. request failed
      315 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      316 cloud-setup: failed 26/30: up 35.82. request failed
      317 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      318 cloud-setup: failed 27/30: up 36.83. request failed
      319 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      320 cloud-setup: failed 28/30: up 37.84. request failed
      321 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      322 cloud-setup: failed 29/30: up 38.84. request failed
      323 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      324 cloud-setup: failed 30/30: up 39.85. request failed
      325 cloud-setup: after 30 fails, debugging
      326 cloud-setup: running debug (30 tries reached)
      327 ############ debug start ##############
      328 ### /etc/rc.d/init.d/sshd start
      329 /etc/rc3.d/S45-cloud-setup: line 66: /etc/rc.d/init.d/sshd: not found
      330 route: fscanf
      331 ### ifconfig -a
      332 eth0 Link encap:Ethernet HWaddr FA:16:3E:90:A5:69
      333 inet6 addr: fe80::f816:3eff:fe90:a569/64 Scope:Link
      334 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
      335 RX packets:11 errors:0 dropped:0 overruns:0 frame:0
      338 RX bytes:1582 (1.5 KiB) TX bytes:902 (902.0 B)
      339
      340 lo Link encap:Local Loopback
      341 inet addr:127.0.0.1 Mask:255.0.0.0
      342 inet6 addr: ::1/128 Scope:Host
      343 UP LOOPBACK RUNNING MTU:16436 Metric:1
      344 RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      345 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      346 collisions:0 txqueuelen:0
      347 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
      348
      349 ### route -n
      350 Kernel IP routing table
      351 Destination Gateway Genmask Flags Metric Ref Use Iface
      352 route: fscanf
      353 ### cat /etc/resolv.conf
      354 cat: can’t open ‘/etc/resolv.conf’: No such file or directory
      355 ### gateway not found
      356 /etc/rc3.d/S45-cloud-setup: line 66: can’t open /etc/resolv.conf: no such file
      357 ### pinging nameservers
      358 ### uname -a
      360 ### lsmod
      361 Module Size Used by Not tainted
      362 vfat 17585 0
      363 fat 61475 1 vfat
      364 isofs 40253 0
      365 ip_tables 27473 0
      366 x_tables 29846 1 ip_tables
      367 pcnet32 42078 0
      368 8139cp 27412 0
      369 ne2k_pci 13691 0
      370 8390 18856 1 ne2k_pci
      371 e1000 108573 0
      372 acpiphp 24080 0
      373 ### dmesg | tail
      374 [ 1.526326] acpiphp: Slot [29] registered
      375 [ 1.526355] acpiphp: Slot [30] registered
      376 [ 1.526384] acpiphp: Slot [31] registered
      377 [ 1.535228] e1000: Intel(R) PRO/1000 Network Driver – version 7.3.21-k8-NAPI
      378 [ 1.535233] e1000: Copyright (c) 1999-2006 Intel Corporation.
      379 [ 1.539191] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker
      394 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526144] acpiphp: Slot [23] registered
      395 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526174] acpiphp: Slot [24] registered
      396 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526203] acpiphp: Slot [25] registered
      397 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526232] acpiphp: Slot [26] registered
      398 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526262] acpiphp: Slot [27] registered
      399 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526291] acpiphp: Slot [28] registered
      400 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526326] acpiphp: Slot [29] registered
      401 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526355] acpiphp: Slot [30] registered
      402 Oct 29 01:45:29 cirros kern.info kernel: [ 1.526384] acpiphp: Slot [31] registered
      410 ############ debug end ##############
      411 cloud-setup: failed to read iid from metadata. tried 30
      412 WARN: /etc/rc3.d/S45-cloud-setup failed
      413 Starting dropbear sshd: generating rsa key… generating dsa key… OK
      414 ===== cloud-final: system completely up in 41.30 seconds ====
      415 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      416 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      417 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      418 instance-id:
      419 public-ipv4:
      420 local-ipv4 :
      421 wget: can’t connect to remote host (169.254.169.254): Network is unreachable
      422 cloud-userdata: failed to read instance id
      423 WARN: /etc/rc3.d/S99-cloud-userdata failed
      424 ____ ____ ____
      425 / __/ __ ____ ____ / __ \/ __/
      426 / /__ / // __// __// /_/ /\ \
      427 \___//_//_/ /_/ \____/___/
      428 http://launchpad.net/cirros
      429
      430 ^M
      431 login as ‘cirros’ user. default password: ‘cubswin:)’. use ‘sudo’ for

    • Gentlemen, has this problem been solved? The metadata service reports: util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [13/120s]: url error [Errno 101] Network is unreachable.

      I am stuck at exactly this spot.
      Modifying nova.conf has no effect.

    • Gentlemen, has this problem been solved? The metadata service reports: util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [13/120s]: url error [Errno 101] Network is unreachable.

      I hit a similar problem.

      • The error I hit is slightly different:
        util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: url error [timed out]

        • In Folsom, the official Quantum documentation already says the metadata service is not supported with overlapping IPs; after that I stopped pursuing it.

  5. A small bug:
    cat > /etc/nova/api-paste.ini < /etc/nova/nova-compute.conf <<EOF

  6. Is the network working? Can VMs in the same subnet but on two different hosts reach each other?

  7. Hello, I have some questions about Keystone.
    Why do we modify admin_tenant_name and admin_user in api-paste.ini when installing services like Glance and Quantum?
    When installing Keystone we change the token in /etc/keystone/keystone.conf; why is that? Is it related to the SERVICE_TOKEN we set in the environment variables, for authenticating against Keystone?
    When a user accesses a service such as Glance, a token seems to be generated and stored in Keystone's database. How is that token generated? Is it related to the SERVICE_TOKEN above? Is it derived from the SERVICE_TOKEN and the username/password through some algorithm?
    Thanks!!

    • 1. Nova, Glance and Quantum all use Keystone for authentication, so each needs a service account to make requests.
      2. Requests to Keystone need a token; for example, running keystone commands requires this token.
      3. That token is different from the service token: when Glance talks to Keystone it obtains a random token.

  8. After nearly two weeks of work I finally upgraded from Essex to Folsom, though without Quantum, mainly because I do not know how to migrate the existing bridges to Open vSwitch while keeping the running VM instances intact, and I could not find any material on this. Brother Shake's document is useful for fresh installs, but it actually complicates upgrades: the compute nodes here are configured for Quantum, and even without changing the network settings, this configuration tries to add bridges and virtual NICs when creating instances, and instances cannot be created.

    Also, according to Emilien Macchi, when configuring the cloud controller you need to delete the "volume" entries from /etc/nova/api-paste.ini. His original text:
    You should also delete each composite with "volume".
    We can do that manually or with this command:
    sed -i '/volume/d' /etc/nova/api-paste.ini

    I wonder why these lines and the corresponding step are missing from the translation. In my experiments this matters a lot for configuring Cinder: without it, nova-api conflicts with cinder-api on the same address, cinder-api fails to start with an address-already-in-use error, and Volumes and Quotas do not appear in the dashboard.

    • You are ahead of me.
      1. Migrating nova-network to Quantum is very hard work, and there may never be documentation for it. Quantum is not production-ready yet, so upgrading to it is difficult; migration will have to be considered later.
      2. I have not had time to test Cinder. I saw that passage, but I think if it is a bug, Ubuntu will handle it. The hard part right now is Quantum.
      3. Quantum is still missing many features, so it is basically only good for testing.

      • Well, it seems not upgrading to Quantum (which is impossible right now anyway) was the right call. When I have time I will set up a test environment to see what Quantum is really like; I will not fiddle with it this time.

        • Hi lc:
          I would like guidance on upgrading from Essex to Folsom; we also run Essex in production.
          Did you install from source? Which branch on GitHub, or the code on Launchpad?
          My QQ is 377204671; let's get acquainted and exchange notes.

          • Sorry, I only just saw your message.

            We did not install from source. As Brother Shake describes, we appended these two lines to /etc/apt/sources.list:
            deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/folsom main
            deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main

            Then ran:
            apt-get install ubuntu-cloud-keyring
            apt-get update && apt-get -y dist-upgrade

            Of course, the relevant configuration files also need changes before the system runs normally.

            I have added your QQ; I hope we can learn from each other.

          • Let's keep in touch. Installation is basically problem-free now; the remaining Quantum issue is that VMs cannot reach external networks, which I am working on.

  9. "Nov 29, 2012: after many reinstalls, access to VMs basically works, but VMs still cannot reach external networks, probably a Quantum bug. Folsom's first point release also came out today; hopefully once Ubuntu integrates it, all related bugs will be fixed. The document is basically usable now."

    Hello Shake! Has this problem been solved? I am stuck at exactly this point.

  10. My VMs cannot reach external networks either; I suspect the L3 agent. Do you see this problem too?

    I just filed a bug, heh:

    https://bugs.launchpad.net/quantum/+bug/1092763

  11. In quantum-networking.sh, the EXT_NET-related IPs should all be public / Internet-access IPs, not the OpenStack management IPs; management traffic stays internal. Many foreign documents write it this way. Otherwise VMs cannot reach the Internet. My personal understanding.

  12. Mr. Chen:
    quantum net-create --tenant_id 6c157da7112c492080e73c66d3122e49 demo-net --provider:network_type gre --provider:segmentation_id 1
    An unknown exception occurred.
    I do not know what went wrong!

    • That command is not in my document.

      • It is from the quantum-networking.sh script; it failed when I ran it, so I pasted it here.

        The cause was that I had not applied the "the L3 agent has a bug that needs a manual fix" change.

        After fixing it, everything works. Thanks.

  13. +--------------------------------------+---------+--------+----------+
    | ID                                   | Name    | Status | Networks |
    +--------------------------------------+---------+--------+----------+
    | 4c8a4dda-0944-47b3-8bb8-b054510d6964 | cloud01 | ERROR  |          |
    +--------------------------------------+---------+--------+----------+

    Using gen_keystone_data.sh and quantum-networking.sh

    • "EXT_GW_IP is in fact the IP address of eth2 on the control node; it is not set through /etc/network/interfaces but by this script. Once the script has run, you can ping this IP."

      I cannot ping it???????

  14. Hi, have you ever seen nova-compute start normally on the compute node, while nova-manage service list on the control node shows that compute node's state as XXX?

    • Did you install the NTP service? Every time I hit this kind of problem it was caused by inconsistent time between nodes.

    • I have heard of this situation. But to locate and solve the cause you really have to read the source. A colleague of mine is looking into it; there are simply too many possible causes.

  15. Yes, I read the source a few days ago; that state is computed from timestamps. My problem was indeed caused by inconsistent time: I misconfigured the NTP server while editing the configuration.

  16. Mr. Chen, while installing the compute node, service nova-api-metadata restart reported that nova-api-metadata was not installed. After installing it, nova-manage service list showed three X marks for that compute node, but once I removed nova-api-metadata it showed smiley faces again. Why is that?

    • That is a spot where the document was not fully cleaned up. The compute node no longer needs nova-api-metadata, but I have not yet removed the service restart below. I will proofread the whole document again some day.

  17. One more thing: after creating the router and the external network and editing /etc/quantum/l3_agent.ini to set the router and external network, the option should be gateway_external_network_id, not gateway_external_net_id.

    Many people miss this; the foreign install guides write it the same way.

    It does not error out only because the code falls back to querying Quantum when gateway_external_net_id is not found.

    • I had actually noticed this and already fixed it; the original English document has been corrected too.

      sed -i -e " s/# router_id =/router_id = $router/g; s/# gateway_external_net_id =/gateway_external_network_id = $ext_net/g;" /etc/quantum/l3_agent.ini

      Still the same problem though: the VMs cannot reach external networks.

  18. sed -i '/env libvirtd_opts/s/-d/-d –l/' /etc/init/libvirt-bin.conf

    This has an error: "–l" uses a non-ASCII dash instead of "-l".

  19. There are a few points I do not quite agree with. In GRE mode the control node only needs 2 NICs, one for the management network and one for the external network; if you do not need external access, one is enough. For a compute node, ignoring performance, one NIC suffices: just set local_ip to the management network IP. See how tunneling is implemented for the reason.
    On VMs not reaching external networks: with the L3 configuration verified correct, check that the external NIC has been added to br-ex and is set to promiscuous mode. Make sure the VM can ping the gateway of the fixed-IP subnet and the router's gateway.
    With namespaces enabled, to ping a VM from the host you must manually add a route for the fixed-IP subnet via the router gateway.
    On metadata: 1. the VM must be able to ping the API server; 2. the API server must be able to ping the VM (connectivity must work both ways); 3. overlapping IPs must not be enabled, since nova's metadata model still assumes one VM maps to one fixed IP.

    • I will debug it properly after Chinese New Year. I have gone over the document n times; namespaces are not enabled, and the NIC is definitely in promiscuous mode.

  20. Hello Mr. Chen, I set everything up on Ubuntu 12.10 following your document, but instances always end up in ERROR state:
    root@ubuntustack:~# nova list
    Please enter password for encrypted keyring:
    +--------------------------------------+-------+--------+----------+
    | ID                                   | Name  | Status | Networks |
    +--------------------------------------+-------+--------+----------+
    | eee16062-665b-4365-9a77-4ca8431de488 | test2 | ERROR  |          |
    +--------------------------------------+-------+--------+----------+
    root@ubuntustack:~# show nova test2
    The program 'show' is currently not installed. You can install it by typing:
    apt-get install nmh
    root@ubuntustack:~# nova show test2
    Please enter password for encrypted keyring:
    +-------------------------------------+---------------------------------------------------------------------------------+
    | Property                            | Value                                                                           |
    +-------------------------------------+---------------------------------------------------------------------------------+
    | OS-DCF:diskConfig                   | MANUAL                                                                          |
    | OS-EXT-SRV-ATTR:host                | None                                                                            |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                                            |
    | OS-EXT-SRV-ATTR:instance_name       | instance-00000006                                                               |
    | OS-EXT-STS:power_state              | 0                                                                               |
    | OS-EXT-STS:task_state               | None                                                                            |
    | OS-EXT-STS:vm_state                 | error                                                                           |
    | accessIPv4                          |                                                                                 |
    | accessIPv6                          |                                                                                 |
    | config_drive                        |                                                                                 |
    | created                             | 2013-02-17T03:47:31Z                                                            |
    | fault                               | {u'message': u'NoValidHost', u'code': 500, u'created': u'2013-02-17T03:47:31Z'} |
    | flavor                              | m1.tiny (6)                                                                     |
    | hostId                              |                                                                                 |
    | id                                  | eee16062-665b-4365-9a77-4ca8431de488                                            |
    | image                               | Ubuntu 12.04 cloudimg amd64 (659b16b8-834a-4086-a10d-6504ded5725d)              |
    | key_name                            | None                                                                            |
    | metadata                            | {}                                                                              |
    | name                                | test2                                                                           |
    | security_groups                     | [{u'name': u'default'}]                                                         |
    | status                              | ERROR                                                                           |
    | tenant_id                           | da65bcbc0ec4428fb4fd2284b8b8e0a4                                                |
    | updated                             | 2013-02-17T03:47:32Z                                                            |
    | user_id                             | 27edd7f0102f41a6910fc9df2680c1f8                                                |
    +-------------------------------------+---------------------------------------------------------------------------------+
    I would like to know what causes this, or where to look for the error details. Thanks!

  21. Mr. Chen Shake, I installed following your method and ran into these problems:
    1. The VM cannot be pinged at all (I did add the TCP and ICMP rules).
    2. Although ping and SSH fail, VNC connections work.
    3. As you describe above, the VM cannot reach the Internet.
    4. If I do not add the subnet, the VMs I launch come up with directly Internet-reachable IPs.
    How do I get ping to reach the VM in this situation?

  22. My VMs can reach external networks. At first they could not; after deleting the default gateway it worked. Give it a try.
    Here is the route table on my network node:
    root@hp4u:~# route
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    50.50.50.0      *               255.255.255.0   U     0      0        0 tap39d7db73-9a
    50.50.50.0      *               255.255.255.0   U     0      0        0 qr-33f834d3-22
    100.10.10.0     *               255.255.255.0   U     0      0        0 eth0
    100.100.100.0   *               255.255.255.0   U     0      0        0 qg-4a649e6b-e1
    100.100.100.0   *               255.255.255.0   U     0      0        0 br-ex
    192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0

  23. Has the VM-cannot-reach-external-networks problem been solved? If any expert has cracked it, please share the solution :)

  24. GRE does not always require 3 NICs. In a lab environment, 3 machines with 1 NIC each works fine; 2 NICs each also works, of course.

  25. At home the installation succeeded and the network was all OK, but at the office it just does not work. The lab environments differ somewhat; I rebuilt the networks and the configuration files are nearly identical. With OpenStack RDO I have no problems. Here the VMs cannot get an IP either, and the bridges do not seem to be set up automatically. I am going to enable debug and take a hard look at my Grizzly setup.
