Sep 21, 2013
 

A while back I saw a post on the official Rackspace blog describing how to use Rackspace's public cloud to test OpenStack. I found this rather interesting; it brought back memories of testing VMware ESX years ago. In my view, one measure of a mature IaaS is whether it can test itself. There is a lot of talk about OpenStack training these days, and the question I care most about is whether students can be given OpenStack environments in which they build their own OpenStack. It hardly makes sense to preach virtualization all day and then run the training on physical servers.

Original article: http://www.rackspace.com/blog/installing-rackspace-private-cloud-in-20-minutes-or-less/

Rackspace's public cloud is built on OpenStack with XenServer underneath, while the Rackspace Private Cloud product uses KVM. The current private cloud release is based on Grizzly, supports only Nova network, and is managed with Chef.

Besides testing the Rackspace private cloud, I also took the opportunity to try the public cloud again, and found it has changed quite a bit.

 

Create two VMs

Chef server: a 512 MB flavor is enough; name it chef

Controller + compute node: 2 GB of RAM; name it petcattle

Creating a VM is a single page; the network and disk-partitioning options stand out, and you can even create your own networks.

[Screenshot: VM creation page]

 

Click Create and a randomly generated password pops up. Rackspace used to send the password by email, and you could retrieve it repeatedly; now it is shown only once, presumably to improve security.

[Screenshot: one-time password dialog]

Creation takes about two or three minutes. Clearly, proper monitoring requires installing an agent. The built-in DNS feature is genuinely convenient.

[Screenshot: VM build status]

For the record, here is its flavor:

[Screenshot: flavor details]

 

Initial setup

SSH into both VMs.

Chef

[Screenshot: SSH login to chef]

Controller node

[Screenshot: SSH login to petcattle]
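
For reference, logging in is just plain ssh as root, using the one-time passwords from above (the IPs are the ones assigned to my two VMs):

ssh root@119.9.12.73     # chef
ssh root@119.9.12.166    # petcattle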

Update /etc/hosts on both hosts so that it contains the following two lines:

119.9.12.73    chef
119.9.12.166    petcattle
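
A quick way to append them on each host (adjust the IPs to match your own VMs):

cat >> /etc/hosts <<'EOF'
119.9.12.73    chef
119.9.12.166    petcattle
EOF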

 

Create an SSH key

Create an SSH key pair on the Chef VM. This is the crucial step: Chef reaches each node over SSH using this key. You can leave every prompt blank and just press Enter.

ssh-keygen
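
If you prefer a non-interactive one-liner, this is equivalent:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # empty passphrase, default key path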

 

[Screenshot: ssh-keygen output]

Copy the public key to the other VM:

ssh-copy-id root@petcattle

 

You will be prompted for the remote machine's password.
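
Afterwards, confirm that passwordless login works:

ssh root@petcattle hostname    # should print "petcattle" without asking for a password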

Install Chef

All of the following steps should be performed on the Chef VM.

 

curl -s -L https://raw.github.com/rcbops/support-tools/master/chef-install/install-chef-server.sh | bash

 

Rackspace has done this well: they run a dedicated Ubuntu mirror. Every domestic public cloud should consider doing the same, since it makes life much easier for users.
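
As an illustration only (the mirror hostname below is hypothetical; check your provider's documentation for the real one), switching APT to such a mirror is a one-line edit:

# Point APT at the provider's local mirror (hostname is illustrative)
sed -i 's|archive.ubuntu.com|mirror.example-cloud.com|g' /etc/apt/sources.list
apt-get update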

Install the cookbooks

curl -s -L https://raw.github.com/rcbops/support-tools/master/chef-install/install-cookbooks.sh | bash

When it finishes, log out and log back in so the environment variables take effect.
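
Alternatively, reload the login shell in place; this assumes the installer wrote its exports into root's shell profile:

exec bash -l    # re-reads /etc/profile and ~/.profile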

Check that Chef installed correctly:

root@chef:~# knife client list
admin
chef-validator
chef-webui

 

Configure Chef

Next, configure the Chef environment for your network. Compare with the original article; you only need to change the public IP addresses.

root@chef:~# cat /root/rpcs.json 
{
  "name": "rpcs",
  "description": "Environment for Rackspace Private Cloud (Grizzly)", 
  "cookbook_versions": {
  },
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "default_attributes": {
  },
  "override_attributes": {
    "nova": {
      "libvirt": {
        "virt_type": "qemu"
      },
      "networks": [
        {
          "label": "public",
          "bridge_dev": "eth1",
          "dns1": "8.8.8.8",
          "dns2": "8.8.4.4",
          "num_networks": "1",
          "ipv4_cidr": "10.0.100.0/24",
          "network_size": "255",
          "bridge": "br100"
        }
      ]
    },
    "mysql": {
      "allow_remote_root": true,
      "root_network_acl": "%"
    },
    "osops_networks": {
      "nova": "119.9.12.0/24",
      "public": "119.9.12.0/24",
      "management": "119.9.12.0/24"
    }
  }
}
 

Upload the file

# knife environment from file /root/rpcs.json 
Updated Environment rpcs

 

See the diagram:

[Diagram: Rackspace Private Cloud running on the Rackspace Public Cloud (RPConRPC.1.2)]

 

Install OpenStack

Use Chef to install the controller and compute node on the other VM:

knife bootstrap petcattle -E rpcs -r 'role[allinone]'

 

When the run completes, you will see a confirmation message.

[Screenshot: knife bootstrap completion output]
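
You can also confirm from the Chef server that the node registered and picked up the right roles:

knife node list
knife node show petcattle -r    # prints the node's run_list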

Log in to the petcattle VM to check how the OpenStack installation went.

Take a look at the openrc file; it also tells you the password for logging in to the Dashboard.

root@petcattle:~# cat openrc 
# This file autogenerated by Chef
# Do not edit, changes will be overwritten

# COMMON OPENSTACK ENVS
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://119.9.12.166:5000/v2.0/
export OS_AUTH_STRATEGY=keystone
export OS_NO_CACHE=1

# LEGACY NOVA ENVS
export NOVA_USERNAME=${OS_USERNAME}
export NOVA_PROJECT_ID=${OS_TENANT_NAME}
export NOVA_PASSWORD=${OS_PASSWORD}
export NOVA_API_KEY=${OS_PASSWORD}
export NOVA_URL=${OS_AUTH_URL}
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=RegionOne

# EUCA2OOLs ENV VARIABLES
export EC2_ACCESS_KEY=
export EC2_SECRET_KEY=
export EC2_URL=http://119.9.12.166:8773/services/Cloud

Verify that it works:

 

root@petcattle:~# source openrc 
root@petcattle:~# keystone user-list
+----------------------------------+------------+---------+-------+
|                id                |    name    | enabled | email |
+----------------------------------+------------+---------+-------+
| 8335641ee546469baa15c5144573c14f |   admin    |   True  |       |
| b0cf47519352432899d728fdb0859eb1 |   cinder   |   True  |       |
| e82c55763da94f0ea34a3698068d7e3b |   glance   |   True  |       |
| 491fe0dd092045a89fae0694957c44ed | monitoring |   True  |       |
| 02aed7c087714e31a2f75361739afeeb |    nova    |   True  |       |
+----------------------------------+------------+---------+-------+

 

 

root@petcattle:~# nova service-list
+------------------+-----------+----------+---------+-------+----------------------------+
| Binary           | Host      | Zone     | Status  | State | Updated_at                 |
+------------------+-----------+----------+---------+-------+----------------------------+
| nova-cert        | petcattle | internal | enabled | up    | 2013-09-20T19:35:22.000000 |
| nova-compute     | petcattle | nova     | enabled | up    | 2013-09-20T19:35:26.000000 |
| nova-conductor   | petcattle | internal | enabled | up    | 2013-09-20T19:35:19.000000 |
| nova-consoleauth | petcattle | internal | enabled | up    | 2013-09-20T19:35:23.000000 |
| nova-network     | petcattle | internal | enabled | up    | 2013-09-20T19:35:27.000000 |
| nova-scheduler   | petcattle | internal | enabled | up    | 2013-09-20T19:35:19.000000 |
+------------------+-----------+----------+---------+-------+----------------------------+

 

Upload an image to Glance from the command line:

glance image-create --disk-format qcow2 --container-format \
bare --name "Ubuntu 12.04.1 Precise (cloudimg)" \
--copy-from http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img \
--is-public true
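
Because --copy-from downloads the image asynchronously, it will sit in the "saving" state for a while; poll until it goes "active":

watch -n 10 glance image-list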

 

List the images:

root@petcattle:~# glance image-list
+--------------------------------------+-----------------------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                              | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------------------------+-------------+------------------+-----------+--------+
| 0c18812f-3b2c-4e41-9180-d1261c9803f9 | cirros-0.3.0-x86_64-uec-initrd    | ari         | ari              | 2254249   | active |
| 1d4101ea-3c48-4662-a3de-3c7ffb8512a0 | cirros-0.3.0-x86_64-uec-kernel    | aki         | aki              | 4731440   | active |
| 0b4135d3-cdf4-46b4-aa18-306fe915bdeb | cirros-image                      | ami         | ami              | 25165824  | active |
| 64e93856-618a-4c76-99ad-07faab28fcb4 | Ubuntu 12.04.1 Precise (cloudimg) | qcow2       | bare             | 253820928 | saving |
+--------------------------------------+-----------------------------------+-------------+------------------+-----------+--------+

 

Log in to the Dashboard

This part is simple: just browse to the controller VM's public IP.

  • user: admin
  • pass: secrete

The default password is fixed rather than randomly generated; I suppose that is one difference between Rackspace and Red Hat.

Notes

For the record, here are the important configuration files.

nova.conf

# This file autogenerated by Chef
# Do not edit, changes will be overwritten
[DEFAULT]

# LOGS/STATE
verbose=true
debug=false
auth_strategy=keystone

logdir=/var/log/nova
#log_config=/etc/nova/logging.conf
state_path=/var/lib/nova
lock_path=/var/lock/nova

##### RABBITMQ #####
rabbit_password=guest
rabbit_port=5672
rabbit_host=119.9.12.166

##### SCHEDULER #####
# scheduler_manager=nova.scheduler.manager.SchedulerManager
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.standard_filters
# which filter class names to use for filtering hosts when not specified in the request.
scheduler_max_attempts=3
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter,SameHostFilter,DifferentHostFilter,RetryFilter
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
default_availability_zone=nova
default_schedule_zone=nova

##### NETWORK #####
network_manager=nova.network.manager.FlatDHCPManager
multi_host=true
public_interface=eth0
fixed_range=10.0.100.0/24
dmz_cidr=10.128.0.0/24
force_dhcp_release=true
send_arp_for_ha=true
auto_assign_floating_ip=false
dhcp_domain=novalocal
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
libvirt_use_virtio_for_bridges=false
dnsmasq_config_file=/etc/nova/dnsmasq-nova.conf

##### GLANCE #####
image_service=nova.image.glance.GlanceImageService
glance_api_servers=119.9.12.166:9292

##### COMPUTE #####
compute_manager=nova.compute.manager.ComputeManager
sql_connection=mysql://nova:5j_KCKBx7kY6Y192tWVC@119.9.12.166/nova
connection_type=libvirt
compute_driver=libvirt.LibvirtDriver
libvirt_type=qemu
# Inject the ssh public key at boot time (default: true)
libvirt_inject_key=false
# Command prefix to use for running commands as root (default: sudo)
rootwrap_config=/etc/nova/rootwrap.conf
# Should unused base images be removed? (default: false)
remove_unused_base_images=true
# Unused resized base images younger than this will not be removed (default: 3600)
remove_unused_resized_minimum_age_seconds=3600
# Unused unresized base images younger than this will not be removed (default: 86400)
remove_unused_original_minimum_age_seconds=3600
# Write a checksum for files in _base to disk (default: false)
checksum_base_images=false

##### VNCPROXY #####
novncproxy_base_url=http://119.9.12.166:6080/vnc_auto.html
xvpvncproxy_base_url=http://119.9.12.166:6081/console

# This is only required on the server running xvpvncproxy
xvpvncproxy_host=119.9.12.166
xvpvncproxy_port=6081

# This is only required on the server running novncproxy
novncproxy_host=119.9.12.166
novncproxy_port=6080

vncserver_listen=119.9.12.166
vncserver_proxyclient_address=119.9.12.166

##### MISC #####
# force backing images to raw format
force_raw_images=false
allow_same_net_traffic=true
osapi_max_limit=1000
snapshot_image_format=qcow2
start_guests_on_host_boot=false
resume_guests_state_on_host_boot=false
# number of security groups per project (default: 10)
quota_security_groups=50
# number of security rules per security group (default: 20)
quota_security_group_rules=20
quota_fixed_ips=40
quota_instances=20
force_config_drive=false

# TODO(shep): not sure if this should go in partial scheduler-options
#             leaving here for now
# FilterScheduler Only Options
# virtual CPU to Physical CPU allocation ratio (default: 16.0)
cpu_allocation_ratio=16.0
# virtual ram to physical ram allocation ratio (default: 1.5)
ram_allocation_ratio=1.5

##### KEYSTONE #####
keystone_ec2_url=http://119.9.12.166:5000/v2.0/ec2tokens

##### VOLUMES #####
# iscsi target user-land tool to use
# NOTE(darren): (this is a nova-volume attribute - cinder carries this
# separately in it's own cinder.conf
iscsi_helper=tgtadm

# when in folsom...
volume_manager=cinder.volume.manager.VolumeManager
volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata
cinder_catalog_info=volume:cinder:publicURL

##### API #####
ec2_workers=2
osapi_compute_workers=2
metadata_workers=2
osapi_volume_workers=2
osapi_compute_listen=119.9.12.166
osapi_compute_listen_port=8774
ec2_listen=119.9.12.166
ec2_listen_port=8773
ec2_host=119.9.12.166

##### CEILOMETER #####
# disabled because ceilometer::ceilometer-compute is not in the run_list

 

 

keystone.conf

# This file autogenerated by Chef
# Do not edit, changes will be overwritten
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = PFQ1dWLByoUqMzEVwTX8

# The IP address of the network interface to listen on
bind_host = 0.0.0.0

# The port number which the public service listens on
public_port = 5000

# The port number which the public admin listens on
admin_port = 35357

# The base endpoint URLs for keystone that are advertised to clients
# (NOTE: this does NOT affect how keystone listens for connections)
# public_endpoint = http://localhost:%(public_port)d/
# admin_endpoint = http://localhost:%(admin_port)d/

# The port number which the OpenStack Compute service listens on
# This is only used in testing
# compute_port = 8774

# Path to your policy definition containing identity actions
# policy_file = policy.json

# Rule to check if no matching policy definition is found
# FIXME(dolph): This should really be defined as [policy] default_rule
# policy_default_rule = admin_required

# Role for migrating membership relationships
# During a SQL upgrade, the following values will be used to create a new role
# that will replace records in the user_tenant_membership table with explicit
# role grants.  After migration, the member_role_id will be used in the API
# add_user_to_project, and member_role_name will be ignored.
# member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab
# member_role_name = _member_
member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab

# === Logging Options ===
# Print debugging output
# (includes plaintext request logging, potentially including passwords)
debug = False

# Print more verbose output
verbose = False

# Name of log file to output to. If not set, logging will go to stdout.
log_file = keystone.log

# The directory to keep log files in (will be prepended to --logfile)
log_dir = /var/log/keystone

# Use syslog for logging.
# use_syslog = False

# syslog facility to receive log lines
# syslog_log_facility = LOG_USER

# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files.
# log_config = logging.conf

# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s

# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S

# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd

[sql]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:s8zKHunVsIKRNiMEG2zv@119.9.12.166/keystone

# the timeout before idle sql connections are reaped
idle_timeout = 200
min_pool_size = 5
max_pool_size = 10
pool_timeout = 200

[identity]
# This references the domain to use for all Identity API v2 requests (which are
# not aware of domains). A domain with this ID will be created for you by
# keystone-manage db_sync in migration 008.  The domain referenced by this ID
# cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API.
# There is nothing special about this domain, other than the fact that it must
# exist to order to maintain support for your v2 clients.
# default_domain_id = default

driver = keystone.identity.backends.sql.Identity

[trust]
# driver = keystone.trust.backends.sql.Trust

# delegation and impersonation features can be optionally disabled
# enabled = True

[catalog]
# dynamic, sql-based backend (supports API/CLI-based management commands)
driver = keystone.catalog.backends.sql.Catalog

# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog

# template_file = default_catalog.templates

[token]
driver = keystone.token.backends.sql.Token

# Amount of time a token should remain valid (in seconds)
expiration = 86400

[policy]
driver = keystone.policy.backends.sql.Policy

[ec2]
driver = keystone.contrib.ec2.backends.sql.Ec2

[ssl]
#enable = True
#certfile = /etc/keystone/ssl/certs/keystone.pem
#keyfile = /etc/keystone/ssl/private/keystonekey.pem
#ca_certs = /etc/keystone/ssl/certs/ca.pem
#cert_required = True
#key_size = 1024
#valid_days = 3650
#ca_password = None
#cert_required = False
#cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=localhost

[signing]
token_format = PKI
#certfile = /etc/keystone/ssl/certs/signing_cert.pem
#keyfile = /etc/keystone/ssl/private/signing_key.pem
#ca_certs = /etc/keystone/ssl/certs/ca.pem

 

cinder.conf

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
sql_connection = mysql://cinder:KmnYJNjRGnZhGbb6RGns@119.9.12.166/cinder
rabbit_host = 119.9.12.166
rabbit_port = 5672
osapi_volume_listen = 119.9.12.166
osapi_volume_listen_port = 8776
iscsi_ip_address = 119.9.12.166
storage_availability_zone = nova
max_gigabytes = 10000
notification_driver=cinder.openstack.common.notifier.rpc_notifier

#### STORAGE PROVIDER INFORMATION ####
volume_group=cinder-volumes
volume_clear=zero
volume_pool_size=None

[keystone_authtoken]
signing_dirname = /tmp/keystone-signing-cinder

Finally, I deleted the VMs and found that this page is nicely designed too.

[Screenshot: VM deletion confirmation page]

  4 Responses to “Testing Rackspace Private Cloud on the Rackspace Public Cloud”

  1. The 2.0 release of the private cloud was quite good: a single ISO installed OpenStack, very much like a commercial product. Unfortunately, starting with 3.0 they switched to the Chef approach, which is far more complicated than Mirantis Fuel.

  2. Hello Mr. Chen, I previously followed your article (http://www.chenshake.com/ubuntu-12-04-openstack-essex-installation-single-node/) and installed successfully, but the Dashboard only lets me create at most 10 VMs. Why is that? Thanks!

    • That is the quota: by default each tenant is limited to 10 instances. Just raise the project's quota; you can find it in the web UI.
