Aug 26, 2014
 

There is an irony in working on IaaS: development and testing are all done on hardware, and we suffer all the usual hardware pain. We set out to solve users' resource problems, yet we keep tormenting ourselves with the same ones.

This problem goes back a long way. In 2007 I was testing ESX 3.0. The product itself wasn't that complicated; it just demanded hardware, and VMware Workstation simply didn't support it back then. ESX training meant flying to a lab in the US. Ugly, right? Once installing ESX inside Workstation became possible, ESX truly spread through the enterprise, and it spread fast.

OpenStack is very hot today, and for OpenStack to spread well, training is essential. Only when training is done well will enterprises have the confidence to adopt OpenStack. Whether we can teach OpenStack on top of OpenStack really matters.

The UnitedStack team has launched a public cloud, UOS, which finally gives everyone an identical environment. Honestly, my past "testing" of IaaS amounted to a few clicks; I never became a real user and it never really solved problems for me. I hope that from now on we can teach OpenStack on top of this OpenStack, so people come to see that IaaS enables a lot of interesting tricks and can crack problems you previously couldn't. Of course, that takes all of us thinking it through together.

The reason I pay so much attention to training is that a large part of my own IT skills came from attending IT training, so I know this industry fairly well.

This OpenStack installation follows a document from abroad: https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst

I have put all of the configuration files from this installation on GitHub; if anything is unclear during your install, refer to them directly:

https://github.com/shake/Openstak-on-openstack

My hope is that anyone can use UOS to repeat every step in this document and achieve the following:

  1. Build a complete OpenStack
  2. Create VMs on Neutron-based networking
  3. Give the VMs access to the public network
  4. Have every Horizon feature work, including migration

Here is the network topology diagram UOS renders; it could look nicer, so do send them feedback.

[Screenshot: UOS network topology]

 

Basic information

[Diagram: node and network layout]

             Management network (10.0.0.0/24)   VM traffic network (10.0.1.0/24)   External network (192.168.100.0/24)
Controller   eth0 (10.0.0.11)                   -                                  eth1 (192.168.100.11)
Network      eth0 (10.0.0.21)                   eth1 (10.0.1.21)                   eth2 (192.168.100.21)
Compute      eth0 (10.0.0.31)                   eth1 (10.0.1.31)                   -

(Note: the compute node's eth1 is 10.0.1.31, matching local_ip in its ml2_conf.ini later on.)

The layout makes a few things clear:

  1. The network node needs 3 NICs.
  2. The controller and network nodes need the external network, i.e. so-called public IPs.
  3. The compute node does not need a public IP.
  4. All VM traffic to the public network passes through the network node.
  5. 192.168.100.0/24 serves as the stand-in for a public IP range.

UOS networking

The network is actually the fiddliest part; once it is prepared, the rest is copying and pasting from the document.

  1. Create a router: router
  2. Create an "openstack" security group; the 3 VMs below all use it, so they don't interfere with each other later.
  3. Request a public IP (1 Mbps of bandwidth is plenty) and bind it to the router.
  4. Create 3 networks: external, VM traffic, and management; connect the external network to the router.
  5. Create 7 ports (NICs) and give each a fixed IP (see the sketch below).
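Since UOS is itself OpenStack, this preparation can in principle be scripted rather than clicked through. A minimal sketch, assuming python-neutronclient is configured with your UOS tenant credentials; the names follow the plan above, using the management network as the example (repeat port-create for all 7 ports):

#Create the management network, its subnet, and one fixed-IP port:
neutron net-create manage-net
neutron subnet-create manage-net --name manage-subnet 10.0.0.0/24
neutron port-create manage-net --name controller-eth0 --fixed-ip ip_address=10.0.0.11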

Networks

[Screenshots: the three networks as created in UOS]

Ports

[Screenshot: the fixed-IP ports]

This is a UOS-specific touch: the idea is simple, but in practice it is very convenient. At the moment UOS cannot attach your own pre-created ports when creating a VM; you have to create the VM first, delete its original NIC, and attach your customized ones.
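If UOS exposes the standard Nova API, the delete-and-attach dance can presumably also be done from the command line. A sketch with placeholder IDs (check them with interface-list first):

nova interface-list controller
nova interface-detach controller <old-port-id>
nova interface-attach --port-id <your-custom-port-id> controller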

Security group

[Screenshot: the security group]

Creating the VMs

We need to create 3 VMs; for convenience, their names are fixed:

  • controller
  • network
  • compute1

[Screenshot: creating the three VMs]

I log in with an SSH key.

[Screenshot: selecting the key pair]

As noted above, you cannot pick your own ports at creation time; create the VM, delete its NIC, and attach the ones you need.

[Screenshot: deleting the default NIC]

Adding the ports

[Screenshot: attaching a custom port]

The result after attaching:

[Screenshot: the attached ports]

 

Accessing the VMs

There are two ways to reach the VMs: port mapping on the router, or a VPN. For this experiment the VPN works best; it makes everything that follows much more convenient.
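For the port-mapping route, a hedged example: if you map, say, external port 10022 on the router's floating IP to port 22 on the controller (the mapping, key file name, and ubuntu user are assumptions, not part of the original setup):

ssh -i uos-key.pem -p 10022 ubuntu@<router-floating-ip>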

 

Controller node

 

[Diagram: controller node components]

The components this node needs are clear from the diagram above.

Base components

Upgrade the system (including the kernel)

apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade

Time service; many problems stem from unsynchronized clocks.

apt-get install -y ntp
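The network and compute nodes will later sync against the controller (server 10.0.0.11 in their ntp.conf), so it can help to let the controller's NTP answer the management network. A sketch of an extra line for /etc/ntp.conf; this is an assumption on my part, the original walkthrough does not configure it:

restrict 10.0.0.0 mask 255.255.255.0 nomodify notrap

service ntp restart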

MySQL

apt-get install -y mysql-server python-mysqldb

Edit /etc/mysql/my.cnf

[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Restart MySQL

service mysql restart

Secure the installation

mysql_install_db
mysql_secure_installation

Message queue: RabbitMQ

apt-get install -y rabbitmq-server

keystone

Install Keystone

apt-get install -y keystone

Create the keystone database. All database work below is done from mysql -u root -p:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

exit;

Remove the SQLite database

rm /var/lib/keystone/keystone.db

Edit /etc/keystone/keystone.conf

[database]
connection = mysql://keystone:KEYSTONE_DBPASS@10.0.0.11/keystone

[DEFAULT]
admin_token=ADMIN
log_dir=/var/log/keystone

Initialize the Keystone database

service keystone restart
keystone-manage db_sync

Set environment variables

export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://10.0.0.11:35357/v2.0

Create a user with admin rights

keystone user-create --name=admin --pass=admin_pass --email=admin@domain.com
keystone role-create --name=admin
keystone tenant-create --name=admin --description="Admin Tenant"
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone user-role-add --user=admin --role=_member_ --tenant=admin

Create a regular user

keystone user-create --name=demo --pass=demo_pass --email=demo@domain.com
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo

Create the service tenant

keystone tenant-create --name=service --description="Service Tenant"

Define the service and its API endpoint

 

keystone service-create --name=keystone --type=identity --description="OpenStack Identity"

Create the endpoint

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://192.168.100.11:5000/v2.0 \
--internalurl=http://10.0.0.11:5000/v2.0 \
--adminurl=http://10.0.0.11:35357/v2.0

Verify Keystone

Use the commands below to check that Keystone initialized correctly.

Set environment variables by creating two files, creds and admin_creds:

cat <<EOF >>/root/creds
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://192.168.100.11:5000/v2.0/"
EOF
cat <<EOF >>/root/admin_creds
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.0.11:35357/v2.0
EOF

Verify

Source the environment variables first, or the commands below won't work:

source creds

Now the following works:

root@controller:~# keystone user-list
+----------------------------------+-------+---------+------------------+
|                id                |  name | enabled |      email       |
+----------------------------------+-------+---------+------------------+
| 6f8bcafd62ec4e23ab2be28016829f91 | admin |   True  | admin@domain.com |
| 66713a75b7c14f73a1c5a015241f5826 |  demo |   True  | demo@domain.com  |
+----------------------------------+-------+---------+------------------+
root@controller:~# keystone role-list
+----------------------------------+----------+
|                id                |   name   |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| cd8dec7752d24a028f95657556f7573d |  admin   |
+----------------------------------+----------+
root@controller:~# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| efc81990ab4c433f94573e2e0fcf08c3 |  admin  |   True  |
| be10dc11d4034b389bef8bbcec657f6f |   demo  |   True  |
| cb45c886bc094f65940ba29d79eab8aa | service |   True  |
+----------------------------------+---------+---------+

Check the logs

The logs live under /var/log/keystone/. Empty them first, then check whether any errors come back:

echo "" > /var/log/keystone/keystone-all.log
echo "" > /var/log/keystone/keystone-manage.log
tail  /var/log/keystone/*

Glance

All OpenStack components install in much the same pattern.

apt-get install -y glance python-glanceclient

Create the database: mysql -u root -p

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

exit;

Create the glance user and service in Keystone

keystone user-create --name=glance --pass=service_pass --email=glance@domain.com
keystone user-role-add --user=glance --tenant=service --role=admin

Set up the endpoint

keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://192.168.100.11:9292 \
--internalurl=http://10.0.0.11:9292 \
--adminurl=http://10.0.0.11:9292

 

Edit /etc/glance/glance-api.conf

[database]
connection = mysql://glance:GLANCE_DBPASS@10.0.0.11/glance

[DEFAULT]
rpc_backend = rabbit
rabbit_host = 10.0.0.11

[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

[paste_deploy]
flavor = keystone

Edit /etc/glance/glance-registry.conf

[database]
# The file name to use with SQLite (string value)
#sqlite_db = /var/lib/glance/glance.sqlite
connection = mysql://glance:GLANCE_DBPASS@10.0.0.11/glance


[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

[paste_deploy]
flavor = keystone

Restart the services

service glance-api restart; service glance-registry restart

Initialize the Glance database

glance-manage db_sync

Upload an image

source creds
glance image-create --name "cirros-0.3.2-x86_64" --is-public true \
--container-format bare --disk-format qcow2 \
--location http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

List the images

# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| d7a6d71d-4222-44f4-82d0-49c14ba19676 | cirros-0.3.2-x86_64 | qcow2       | bare             | 13167616 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+

Check the log

root@controller:~# tail /var/log/glance/*
==> /var/log/glance/api.log <==
2014-09-02 07:07:12.315 2946 WARNING glance.store.base [-] Failed to configure store correctly:
 Store sheepdog could not be configured correctly. Reason:
 Error in store configuration: [Errno 2] No such file or directory Disabling add method.
2014-09-02 07:07:12.316 2946 WARNING glance.store [-] Deprecated: glance.store.
sheepdog.Store not found in `known_store`. 
Stores need to be explicitly enabled in the configuration file.

You will see "errors" like these in the log; they are not a problem. I wish Glance would improve this logging, since it leaves many newcomers baffled.

 

Nova

Install the packages

apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler python-novaclient

Create the nova database: mysql -u root -p

CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

exit;

Configure Keystone

keystone user-create --name=nova --pass=service_pass --email=nova@domain.com
keystone user-role-add --user=nova --tenant=service --role=admin

Set up the endpoint

keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://192.168.100.11:8774/v2/%\(tenant_id\)s \
--internalurl=http://10.0.0.11:8774/v2/%\(tenant_id\)s \
--adminurl=http://10.0.0.11:8774/v2/%\(tenant_id\)s

Edit /etc/nova/nova.conf

Here is the complete content of my nova.conf:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

rpc_backend = rabbit
rabbit_host = 10.0.0.11
my_ip = 10.0.0.11
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass

[database]
connection = mysql://nova:NOVA_DBPASS@10.0.0.11/nova

Remove the SQLite database

rm /var/lib/nova/nova.sqlite

Initialize the Nova database

nova-manage db sync

Restart the Nova services

service nova-api restart
service nova-cert restart
service nova-conductor restart
service nova-consoleauth restart
service nova-novncproxy restart
service nova-scheduler restart

Verify

# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        controller                           internal         enabled    :-)   2014-08-26 14:13:08
nova-consoleauth controller                           internal         enabled    :-)   2014-08-26 14:13:08
nova-conductor   controller                           internal         enabled    :-)   2014-08-26 14:13:08
nova-scheduler   controller                           internal         enabled    :-)   2014-08-26 14:13:08

 

Neutron

The controller node also needs the Neutron server installed.

apt-get install -y neutron-server neutron-plugin-ml2

Create the neutron database: mysql -u root -p

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO neutron@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO neutron@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

exit;

Create the neutron user and role in Keystone

keystone user-create --name=neutron --pass=service_pass --email=neutron@domain.com
keystone user-role-add --user=neutron --tenant=service --role=admin

Register the service and endpoint

keystone service-create --name=neutron --type=network --description="OpenStack Networking"

keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ network / {print $2}') \
--publicurl=http://192.168.100.11:9696 \
--internalurl=http://10.0.0.11:9696 \
--adminurl=http://10.0.0.11:9696

Edit /etc/neutron/neutron.conf. The key item is nova_admin_tenant_id, which you must look up manually with a command and fill in:

keystone tenant-list | awk '/ service / { print $2 }'
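If you prefer, a small script can fetch the ID and patch the file in place. A sketch, assuming the nova_admin_tenant_id line already exists uncommented in the file (adjust the pattern otherwise):

SERVICE_TENANT_ID=$(keystone tenant-list | awk '/ service / { print $2 }')
sed -i "s|^nova_admin_tenant_id =.*|nova_admin_tenant_id = ${SERVICE_TENANT_ID}|" /etc/neutron/neutron.conf

The sections I changed in /etc/neutron/neutron.conf: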
#core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
core_plugin = ml2

# service_plugins =
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering
service_plugins = router

# auth_strategy = keystone
auth_strategy = keystone

# allow_overlapping_ips = False
allow_overlapping_ips = True

rpc_backend = neutron.openstack.common.rpc.impl_kombu

rabbit_host = 10.0.0.11

notification_driver = neutron.openstack.common.notifier.rpc_notifier

# ======== neutron nova interactions ==========
# Send notification to nova when port status is active.
notify_nova_on_port_status_changes = True

# Send notifications to nova when port data (fixed_ips/floatingips) change
# so nova can update it's cache.
notify_nova_on_port_data_changes = True

# URL for connection to nova (Only supports one nova region currently).
nova_url = http://10.0.0.11:8774/v2

# Name of nova region to use. Useful if keystone manages more than one region
# nova_region_name =

# Username for connection to nova in admin context
nova_admin_username = nova

# The uuid of the admin nova tenant
nova_admin_tenant_id = cb45c886bc094f65940ba29d79eab8aa

# Password for connection to nova in admin context.
nova_admin_password = service_pass

# Authorization URL for connection to nova in admin context.
nova_admin_auth_url = http://10.0.0.11:35357/v2.0

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#signing_dir = $state_path/keystone-signing
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass

[database]
# This line MUST be changed to actually run the plugin.
# Example:
# connection = mysql://root:pass@127.0.0.1:3306/neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
#connection = sqlite:////var/lib/neutron/neutron.sqlite
connection = mysql://neutron:NEUTRON_DBPASS@10.0.0.11/neutron

Edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

 

Edit /etc/nova/nova.conf so that Nova uses Neutron; add under [DEFAULT]:

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.0.0.11:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=service_pass
neutron_admin_auth_url=http://10.0.0.11:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

Restart the Nova services

service nova-api restart
service nova-scheduler restart
service nova-conductor restart

Restart the Neutron service

service neutron-server restart

Check the log

root@controller:~# tail -f /var/log/neutron/*
2014-09-02 07:27:53.950 5373 WARNING neutron.api.extensions [-] Extension fwaas not supported by any of loaded plugins
2014-09-02 07:27:53.952 5373 WARNING neutron.api.extensions [-] Extension flavor not supported by any of loaded plugins
2014-09-02 07:27:53.962 5373 WARNING neutron.api.extensions [-] Extension lbaas_agent_scheduler not supported by any of loaded plugins
2014-09-02 07:27:53.967 5373 WARNING neutron.api.extensions [-] Extension lbaas not supported by any of loaded plugins
2014-09-02 07:27:53.969 5373 WARNING neutron.api.extensions [-] Extension metering not supported by any of loaded plugins
2014-09-02 07:27:53.973 5373 WARNING neutron.api.extensions [-] Extension port-security not supported by any of loaded plugins
2014-09-02 07:27:53.977 5373 WARNING neutron.api.extensions [-] Extension routed-service-insertion not supported by any of loaded plugins

The log complains about extensions not supported by the loaded plugins; this is all normal.

 

Horizon

Installing the Dashboard is comparatively simple; no database is needed.

apt-get install -y apache2 memcached libapache2-mod-wsgi openstack-dashboard

Edit /etc/openstack-dashboard/local_settings.py

#ALLOWED_HOSTS = ['horizon.example.com', ]
ALLOWED_HOSTS = ['localhost','192.168.100.11']

#OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_HOST = "10.0.0.11"

Restart Apache

service apache2 restart; service memcached restart

At this point you can open http://192.168.100.11/horizon and see the login page, though you should not yet be able to log in.

Install the OpenStack clients

Installing the OpenStack clients on the controller makes life much easier; most Neutron operations become available right there.

apt-get -y install python-openstackclient

Network node

A picture explains it best; this one comes from Red Hat's official documentation.

[Diagram: Neutron network-node architecture, from the Red Hat documentation]

The network node needs 3 NICs. People often ask whether 1 NIC would do. One NIC certainly can work, but it doesn't help comprehension. And since machines with 3 NICs are hard to come by, testing on IaaS is much more convenient.

[Diagram: network node components]

Create a VM named network, delete its NIC, and attach 3 ports. SSH in; by default it cannot reach the outside, for the simple reason that it has no default route. Adding one by hand fixes it, as shown below.
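A hedged example, using the external gateway from the plan above:

route add default gw 192.168.100.1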

Because the network node is special, we pin its NIC IPs statically in /etc/network/interfaces:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# Source interfaces
# Please check /etc/network/interfaces.d before changing this file
# as interfaces may have been defined in /etc/network/interfaces.d
# NOTE: the primary ethernet device is defined in
# /etc/network/interfaces.d/eth0
# See LP: #1262951
#source /etc/network/interfaces.d/*.cfg
# The management network interface
  auto eth0
  iface eth0 inet static
  address 10.0.0.21
  netmask 255.255.255.0

# VM traffic interface
  auto eth1
  iface eth1 inet static
  address 10.0.1.21
  netmask 255.255.255.0

# The public network interface
 auto eth2
 iface eth2 inet static
 address 192.168.100.21
 netmask 255.255.255.0
 gateway 192.168.100.1
 dns-nameservers 114.114.114.114

When done, reboot the VM.

Now the node can reach the outside; install the packages.

apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade

Sync time

apt-get install -y ntp

Edit /etc/ntp.conf

server 10.0.0.11

Restart NTP

service ntp restart

Install base components

apt-get install -y vlan bridge-utils

Edit /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply the settings

sysctl -p

Install the Neutron components

apt-get install -y neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
dnsmasq neutron-l3-agent neutron-dhcp-agent

Edit /etc/neutron/neutron.conf; there is much less to change here than on the controller.

#core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
core_plugin = ml2

# service_plugins =
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering
service_plugins = router

# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
auth_strategy = keystone

allow_overlapping_ips = True

rpc_backend = neutron.openstack.common.rpc.impl_kombu

rabbit_host = 10.0.0.11

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#signing_dir = $state_path/keystone-signing
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass

Edit /etc/neutron/l3_agent.ini

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

Edit /etc/neutron/dhcp_agent.ini

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True

Edit /etc/neutron/metadata_agent.ini

auth_url = http://10.0.0.11:5000/v2.0
auth_region = regionOne

admin_tenant_name = service
admin_user = neutron
admin_password = service_pass
nova_metadata_ip = 10.0.0.11
metadata_proxy_shared_secret = helloOpenStack

Log in to the controller node and add the following under [DEFAULT] in /etc/nova/nova.conf:

service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = helloOpenStack

Restart the Nova API service

service nova-api restart

Edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ovs]
local_ip = 10.0.1.21
tunnel_type = gre
enable_tunneling = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Restart Open vSwitch

service openvswitch-switch restart

Create br-ex

br-ex connects to the external network, which is not the easiest thing to picture, so look at the diagram. Roughly: we create a bridge br-ex and bind eth2 into it; eth2 is the interface that connects to the public-side router.

[Diagram: br-ex bridged onto eth2]

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2

Below is the result from my own run; take your time working through it.

 

Edit /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# Source interfaces
# Please check /etc/network/interfaces.d before changing this file
# as interfaces may have been defined in /etc/network/interfaces.d
# NOTE: the primary ethernet device is defined in
# /etc/network/interfaces.d/eth0
# See LP: #1262951
#source /etc/network/interfaces.d/*.cfg
# The management network interface
  auto eth0
  iface eth0 inet static
  address 10.0.0.21
  netmask 255.255.255.0

# VM traffic interface
  auto eth1
  iface eth1 inet static
  address 10.0.1.21
  netmask 255.255.255.0

# The public network interface
# auto eth2
# iface eth2 inet static
# address 192.168.100.21
# netmask 255.255.255.0
# gateway 192.168.100.1
# dns-nameservers 114.114.114.114

auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

auto br-ex
iface br-ex inet static
address 192.168.100.21
netmask 255.255.255.0
gateway 192.168.100.1
dns-nameservers 114.114.114.114

Reboot the VM.

Swap the MAC addresses of br-ex and eth2

Because of network restrictions, 192.168.100.21 and 192.168.100.11 cannot talk to each other right now: for security, the cloud binds and enforces the MAC-to-IP mapping of every port.

Check each interface's MAC with ifconfig, then swap the two with the commands below.

  • br-ex MAC address: c2:32:7d:cf:9d:43
  • eth2 MAC address: fa:16:3e:80:5d:e6

ip link set eth2 addr c2:32:7d:cf:9d:43
ip link set br-ex addr fa:16:3e:80:5d:e6

Now the external-network IPs can reach each other. The change is temporary: if you restart the Neutron services, the MACs revert. Our experiment doesn't need a restart, so this stopgap is fine; a thorough fix comes later.
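For reference, the thorough fix (described in the Open vSwitch article listed under References) pins the bridge MAC in the OVS database so it survives restarts; a sketch using the MAC from this example:

ovs-vsctl set bridge br-ex other-config:hwaddr=fa:16:3e:80:5d:e6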

 

Set environment variables

cat <<EOF >>/root/creds
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://192.168.100.11:5000/v2.0/"
EOF

Now you can list the installed agents:

source creds
neutron agent-list

 

# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+
| id                                   | agent_type         | host    | alive | admin_state_up |
+--------------------------------------+--------------------+---------+-------+----------------+
| 3a80d2ea-bcf6-4835-b125-55144948024c | Open vSwitch agent | network | :-)   | True           |
| 4219dd20-c4fd-4586-b2fc-c81bec0015d6 | L3 agent           | network | :-)   | True           |
| e956687f-a658-4226-a34f-368da61e9e44 | Metadata agent     | network | :-)   | True           |
| f3e841f8-b803-4134-9ba6-3152c3db5592 | DHCP agent         | network | :-)   | True           |
+--------------------------------------+--------------------+---------+-------+----------------+

 

Compute node

 

[Diagram: compute node components]

 

Create a VM named compute1, delete its NIC, and attach 2 ports. SSH into it.

By default the compute node needs no public network, but since I have to install packages it must get online: after creating the VM, connect it to the external network, finish installing, then disconnect it.

route add default gw 192.168.100.1

Now the node can reach the outside; install the packages.

apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade

Sync time

apt-get install -y ntp

Edit /etc/ntp.conf

server 10.0.0.11

Restart NTP

service ntp restart

Install the KVM stack

apt-get install -y kvm libvirt-bin pm-utils

Install the compute components

apt-get install -y nova-compute-kvm python-guestfs

Make the kernel image readable (required by python-guestfs)

dpkg-statoverride  --update --add root root 0644 /boot/vmlinuz-$(uname -r)

Create the script /etc/kernel/postinst.d/statoverride so future kernels are handled too:

#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}

Make it executable

chmod +x /etc/kernel/postinst.d/statoverride

Edit /etc/nova/nova.conf and add the following:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

auth_strategy = keystone
rpc_backend = rabbit
rabbit_host = 10.0.0.11
my_ip = 10.0.0.31
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://192.168.100.11:6080/vnc_auto.html
glance_host = 10.0.0.11
vif_plugging_is_fatal=false
vif_plugging_timeout=0


[database]
connection = mysql://nova:NOVA_DBPASS@10.0.0.11/nova

[keystone_authtoken]
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass

Remove the SQLite database

rm /var/lib/nova/nova.sqlite

Restart the compute service

service nova-compute restart

Edit /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply immediately

sysctl -p

Install the networking components

apt-get install -y neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent

Edit /etc/neutron/neutron.conf

#core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
core_plugin = ml2

# service_plugins =
# Example: service_plugins = router,firewall,lbaas,vpnaas,metering
service_plugins = router

auth_strategy = keystone

allow_overlapping_ips = True

rpc_backend = neutron.openstack.common.rpc.impl_kombu

rabbit_host = 10.0.0.11

[keystone_authtoken]
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#signing_dir = $state_path/keystone-signing
auth_uri = http://10.0.0.11:5000
auth_host = 10.0.0.11
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = service_pass

Edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ovs]
local_ip = 10.0.1.31
tunnel_type = gre
enable_tunneling = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

Restart OVS

service openvswitch-switch restart

Edit /etc/nova/nova.conf again, adding under [DEFAULT]:

network_api_class = nova.network.neutronv2.api.API
neutron_url = http://10.0.0.11:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = service_pass
neutron_admin_auth_url = http://10.0.0.11:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

Edit /etc/nova/nova-compute.conf to use qemu:

[DEFAULT]
compute_driver=libvirt.LibvirtDriver
[libvirt]
virt_type=qemu

Restart the related services

service nova-compute restart
service neutron-plugin-openvswitch-agent restart

With that, the installation is complete.

Log in to the controller node:

root@controller:~# source creds 
root@controller:~# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        controller                           internal         enabled    :-)   2014-09-02 10:31:03
nova-conductor   controller                           internal         enabled    :-)   2014-09-02 10:31:04
nova-scheduler   controller                           internal         enabled    :-)   2014-09-02 10:30:58
nova-consoleauth controller                           internal         enabled    :-)   2014-09-02 10:31:00
nova-compute     compute1                             nova             enabled    :-)   2014-09-02 10:30:57
root@controller:~# 

 

Creating a VM from the command line

On the controller, just run the commands below; the image was uploaded earlier. Everything here can also be done in the Dashboard, but the command line gives a deeper understanding.

All of the following is done on the controller node.

Create the external network

source creds

#Create the external network:
neutron net-create ext-net --shared --router:external=True

#Create the subnet for the external network:
neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=192.168.100.101,end=192.168.100.200 \
--disable-dhcp --gateway 192.168.100.1 192.168.100.0/24

Create an internal network for the tenant

#Create the internal network:
neutron net-create int-net

#Create the subnet for the internal network:
neutron subnet-create int-net --name int-subnet \
--dns-nameserver 114.114.114.114 --gateway 172.16.1.1 172.16.1.0/24

Create a router and connect it to the external network

#Create the router:
neutron router-create router1

#Attach the router to the internal subnet:
neutron router-interface-add router1 int-subnet

#Attach the router to the external network by setting it as the gateway:
neutron router-gateway-set router1 ext-net

Create a key pair

ssh-keygen

Add the public key

nova keypair-add --pub-key ~/.ssh/id_rsa.pub key1

Set up the security group

# Permit ICMP (ping):
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

# Permit secure shell (SSH) access:
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Create the VM

NET_ID=$(neutron net-list | awk '/ int-net / { print $2 }')
nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=$NET_ID \
--security-group default --key-name key1 instance1

List the VMs

nova list

Request a floating IP

neutron floatingip-create ext-net

Associate the floating IP

nova floating-ip-associate instance1 192.168.100.102

At this point you will find that from the controller you cannot reach the router at 192.168.100.101 or the floating IP 192.168.100.102 at all.

To reach the VM, log in to the network node and use the commands below:

# ip netns
qdhcp-bf7f3043-d696-4735-9bc7-8c2e4d95c8d5
qrouter-7e8bbb53-1ea6-4763-a69c-a0c875b5224b

The first namespace serves the VM network's DHCP; the second is the router.

# ip netns exec qdhcp-bf7f3043-d696-4735-9bc7-8c2e4d95c8d5 ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1216 (1.2 KB)  TX bytes:1216 (1.2 KB)

tap1a85db16-da Link encap:Ethernet  HWaddr fa:16:3e:ce:e0:e2  
          inet addr:172.16.1.3  Bcast:172.16.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fece:e0e2/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:415 errors:0 dropped:0 overruns:0 frame:0
          TX packets:105 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:64724 (64.7 KB)  TX bytes:10228 (10.2 KB)
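From the namespaces you can also reach the instance itself. A sketch: first find the instance's fixed IP with nova list on the controller (here I assume it received 172.16.1.2), then on the network node:

ip netns exec qdhcp-bf7f3043-d696-4735-9bc7-8c2e4d95c8d5 ping -c 3 172.16.1.2
ip netns exec qdhcp-bf7f3043-d696-4735-9bc7-8c2e4d95c8d5 ssh cirros@172.16.1.2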

[Diagram: instance creation]

 

Reaching the public network

You may notice an obvious problem: from the network node you can ping the VM's floating IP and the router IP, but from the controller you cannot.

For a more complete setup, where VMs can ping the public network, we need to understand a bit more. All of this traffic leaves through the port that owns 192.168.100.21, so we must configure that port to let every IP and MAC address pass.

Log in to the network node and ping 192.168.100.101 and 192.168.100.102 to learn their MAC addresses:

# arp -a
? (10.0.0.11) at fa:16:3e:34:d0:7a [ether] on eth0
? (192.168.100.102) at fa:16:3e:0c:be:cd [ether] on br-ex
? (10.0.1.31) at fa:16:3e:eb:96:1c [ether] on eth1
? (192.168.100.101) at fa:16:3e:0c:be:cd [ether] on br-ex
? (192.168.100.1) at fa:16:3e:c2:a8:a8 [ether] on br-ex

The following steps can be done on the controller node.

Fetch a token with curl.

Using the token, modify allowed_address_pairs on the 192.168.100.21 port; you may as well fix the eth2 and br-ex entries at the same time, so a service restart is no longer a concern.

For the detailed steps, this post covers it:

http://www.chenshake.com/use-the-uos-api/
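For reference, the general shape of the two calls against a Keystone v2 and Neutron API. The endpoints, credentials, and port ID are placeholders, and whether a 0.0.0.0/0 pair is accepted depends on the deployment (listing the specific IP/MAC pairs also works):

# 1. Get a token:
curl -s -X POST http://<uos-keystone>:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "<tenant>", "passwordCredentials": {"username": "<user>", "password": "<password>"}}}'

# 2. Loosen allowed_address_pairs on the port that carries 192.168.100.21:
curl -s -X PUT http://<uos-neutron>:9696/v2.0/ports/<port-id> \
  -H "Content-Type: application/json" -H "X-Auth-Token: <token>" \
  -d '{"port": {"allowed_address_pairs": [{"ip_address": "0.0.0.0/0"}]}}'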

 

 

VNC access

If you log in to Horizon and open an instance's console, VNC may not connect; log in to UOS and adjust the security group rules. The first VM's VNC uses port 6080 by default, or you can simply open all ports.
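If you would rather script it than click through the console, a sketch, assuming CLI access to your UOS tenant and that the security group created earlier is named openstack:

nova secgroup-add-rule openstack tcp 6080 6080 0.0.0.0/0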

[Screenshot: UOS security group rule for VNC]

References

http://oddbit.com/rdo-hangout-multinode-packstack-slides/#/

https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst

On persistent OVS MAC addresses: http://blog.oddbit.com/2014/05/23/open-vswitch-and-persistent-ma/

ovs-vsctl walkthrough

root@network:~# ovs-vsctl show
533105dd-bd0d-4af1-a331-c9394fbcb775
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.0.2"
root@network:~# ovs-vsctl add-br br-ex
root@network:~# ovs-vsctl show        
533105dd-bd0d-4af1-a331-c9394fbcb775
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.0.2"
root@network:~# ovs-vsctl add-port br-ex eth2
root@network:~# ovs-vsctl show
533105dd-bd0d-4af1-a331-c9394fbcb775
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.0.2"

Restarting the services on the network node

service neutron-plugin-openvswitch-agent restart
service neutron-dhcp-agent restart
service neutron-l3-agent restart
service neutron-metadata-agent restart
service dnsmasq restart

  34 Responses to “Installing multi-node OpenStack on UnitedStack UOS”

  1. If Shake wrote it, it's first-rate: illustrated, careful, and thorough.

  2. Shouldn't the title be "Installing multi-node OpenStack on UnitedStack"?

  3. Hi, Chen Shake,

    I followed the document to install a two-node OpenStack.
    The neutron agent-list command produces no output; it hangs there and then errors out (File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 246, in authenticate
    raise exceptions.Unauthorized(message=resp_body)
    Unauthorized: )

    Also keystone.log contains nothing (debug = true). With nova list, Keystone does log output, so I suspect neutronclient isn't even sending the request to Keystone?

    1. The environment variables are set; nova list works fine.
    2. neutron.conf is configured.
    3. The Keystone neutron account and password are set.

    • Did you swap the MAC addresses of br-ex and eth2? Be careful with that step and follow it exactly, or you will hit exactly this.

      The root cause: once you run ovs-vsctl add-port br-ex eth2, eth2 can no longer reach the outside. You have to change the MAC addresses.

    • You can try ovs-vsctl del-port br-ex eth2,

      then restart the services; at that point neutron agent-list works again.

    • Found the cause... very strange: unset http_proxy https_proxy fixes it. Oddly, nova list works without unsetting. Do novaclient and the neutron client implement HTTP differently? I'll dig into it later.

  4. Very detailed, thanks.
    1. The dashes in `mysql –u root –p' are full-width; copy/paste will fail.
    2. At the beginning, should "accessing the VM" say VNC rather than vpn?

    • I noticed that too, but it appears in too many places and I haven't cleaned them all up. To use VNC you do need to dial in over PPTP VPN first.

  5. Hello, I ran into a problem installing keystone:
    root@controller:~# apt-get install -y keystone
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
    keystone : Depends: python-keystone (= 1:2014.2+git201407191001~trusty-0ubuntu1) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    root@controller:~#

    Then I tried installing python-keystone and saw this:
    root@controller:~# apt-get install -y python-keystone
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
    python-keystone : Depends: python-sqlalchemy (< 0.9) but 0.9.7-1~cloud0 is to be installed
    Depends: python-oslo.db but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    root@controller:~#

    Could you tell me what the problem is?

  6. Great work. "Compute node eth1 (10.0.1.21)" is a typo; it should be 10.0.1.31, right?

  7. Mr. Chen, does "multiple data centers" in OpenStack mean multiple regions? To run multiple regions, does each region need its own endpoints plus nova, neutron, and the other components? And how are the databases organized: a separate set of per-component databases per region, with a shared Keystone database?

  8. Hi Shake,

    Sorry, I'm just starting with OpenStack and copying along. I don't know where I went wrong: the first controller node I built cannot reach the outside. The router looks connected on both ends (the public network and the OpenStack external network), and one of the controller's NICs is also attached to the OpenStack external network.

    Running apt-get update -y && apt-get upgrade -y && apt-get dist-upgrade produces nothing;
    I seem to be able to ping 192.168.100.1, but nothing beyond that.

    Thanks!

  9. Mr. Chen, following your tutorial I installed multi-node OpenStack across VMs, but the Dashboard never loads in the browser; I get:
    It works!
    This is the default web page for this server.
    The web server software is running but no content has been added, yet.
    And the /var/www/ directory contains only an index.html. Why is that?

    • Hello, I also built a multi-node OpenStack on UOS, but I never understood how to reach it through the Dashboard. The hosts requested on UOS are VNC-only; all I see is a black terminal. Do you all mean you can reach the OpenStack built on UOS from a local browser? How exactly? Much appreciated! :-)

  10. Mr. Chen, on the compute node I configured eth0 and eth1 in the interfaces file and ran route add, but I get SIOCADDRT: network is unreachable. Do I need to add eth2 as well?

  11. Mr. Chen, on the network node, after adding br-ex and setting up the interfaces file, on reboot br-ex came up with the same MAC as eth2 and could ping the controller straight away.

    • It seems you cannot reboot: once you do, the modified MAC addresses revert. The most robust fix is to change them through the API, but that is fiddly, and I haven't figured out how to document that part yet.

  12. Mr. Chen, I've hit a problem: nova-manage service list shows no compute entry, and neutron agent-list shows nothing at all.

  13. Hello, Mr. Chen!
    I'm on Red Hat 7 and deployed a simple OpenStack development environment with devstack. The stack.sh script completed without errors, and nova, cinder, glance, horizon and the rest are all up, but I cannot reach the dashboard at either http://58.251.159.43 or http://58.251.159.43/horizon. Could you help? Many thanks!

  14. A question: are all three created networks (external, management, and VM traffic) GRE-type?
    If so, can instances created on the OpenStack-on-OpenStack be reached from outside machines?
    Thanks!
