Multi-node OpenStack RDO IceHouse on AWS/EC2 and Google

OpenStack is awesome. But in order to try out the latest releases, you typically need extra hardware and a fair amount of time.

Maybe you’ve always wanted to play with it but never found the time? Or maybe you did install it, but had to spend days scrounging for suitable hardware? Or maybe you’re an expert, but have no way to quickly spin entirely new installs up and down?

If the answer is yes to any of these, then read on. In this post I’m going to share my experience setting up a non-trivial OpenStack installation on Amazon EC2 or Google Compute Engine using the Ravello service.

The great thing about doing the installation in the cloud is that you don’t need your own hardware. And thanks to the unique design of Ravello, it is possible to do things that normally can’t be done in the cloud, such as:

  • running nested KVM guests at full speed, using hardware-accelerated (nested) virtualization;
  • creating fully isolated, VLAN-based tenant networks with full layer-2 access.

For this blog post I used the latest OpenStack IceHouse release, but the instructions should work for Havana as well. There’s some small differences which I will mention when we get to them.

Check out the video and slides from the free webinar we did on running OpenStack/KVM on AWS using Ravello.


Virtual Hardware Configuration

I decided to use a total of 4 virtual machines for the setup. They look like this:

VM          Hardware (CPU / mem / storage)    OpenStack Services
Controller  2 CPUs / 4GB / 400GB              Nova, Glance, Horizon, Keystone
Network     2 CPUs / 4GB / -                  Neutron Server, Neutron L2/L3/DHCP agents
Compute1    2 CPUs / 6GB / 200GB              Nova Compute
Compute2    2 CPUs / 6GB / 200GB              Nova Compute

All nodes have 2 CPUs, 4GB of RAM and a 32GB root volume, with the exception of the compute nodes, which have 6GB of RAM instead of 4GB so that they can better run virtual guests. The controller has an extra 400GB of block storage that will be used by Glance to store images. The compute nodes have an extra 200GB that is used by Nova Compute to store the instance files. In the Ravello web interface, the application looks like this:

Ravello Openstack VM design

The application also contains 3 networks:

Network     CIDR              Gateway
external    192.168.0.0/24    192.168.0.1
management  192.168.10.0/24   (no external access)
vmdata      Layer-2 only      (not applicable)

The external network provides external access. In theory, only the controller and network nodes need it, but I enabled it for all nodes, which makes it easier to do things like updates. The management network is used for communication between the OpenStack components themselves. The vmdata network is used for communication between tenant virtual machines: each tenant network will be a VLAN on the vmdata network. This gives us fully isolated tenant networks, where each tenant network has full layer-2 access and is in its own broadcast domain.

The VMs are connected to the networks in the following way:

System      eth0           eth1            eth2
Controller  192.168.0.10   192.168.10.10   (not connected)
Network     192.168.0.11   192.168.10.11   Layer-2 only
Compute1    192.168.0.12   192.168.10.12   Layer-2 only
Compute2    192.168.0.13   192.168.10.13   Layer-2 only

In the Ravello web interface, the network looks like this:

Ravello Openstack network design

Two notes about the networking diagram. First, there are DHCP annotations on the switches, which are not correct since DHCP is not enabled; this will be resolved in a future update. Second, the third network has a CIDR of 10.0.0.0/24 even though it only carries L2 traffic. This is because of a UI quirk where a value for the network address is required.

Operating System Installation and Configuration

For the software I decided to use the Red Hat RDO distribution on CentOS 6.5 64-bit. Red Hat is one of the leaders in the OpenStack community, so it made sense to use their distribution. One of the nice things about RDO is that it comes with the “packstack” installer. Packstack can install a multi-node setup in an automated way, based on an “answer file”. (My first attempt was to try to follow the installation guide, but I gave up after 2 days. THANK YOU packstack.)

I installed the 4 VMs in the Ravello application by booting from the CentOS installation DVD that I uploaded into Ravello. I configured each VM as follows:

  • Installation using the “Minimum” profile.
  • The root file system is on an LVM logical volume.
  • On nodes that have extra block storage, it is added to the logical volume holding the root file system, and the root fs is extended.
  • Ran yum update to get the latest updates.
  • Enabled NTP.
  • SELinux was left enabled.
  • Static networking configuration for all nodes.
  • The network node has a special network configuration. I created an openvswitch bridge called “br-ex” for external access, and I added the physical port “eth0” to it. This is how packstack expects it to be. The ifcfg files are here and here (a sketch of these files is shown after this list).
  • There’s a single SSH key that is installed on every node and allows each node to ssh to every other node. The key can only be used from the external and management networks. This is achieved by prefixing the public key in “~/.ssh/authorized_keys” with from="192.168.0.0/16".
  • EPEL is installed.
  • Cloud-init is installed from EPEL and configured so that it creates a “ravello” user with full sudo access. The passwords for “ravello” and “root” are set to “ravelloCloud”. Non-secret passwords are OK in this case because OpenSSH is configured to accept only public key authentication, the keys for which are deployed by cloud-init. The local passwords are very useful when troubleshooting boot-time problems over the VNC console.
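
For reference, here is a minimal sketch of what the two ifcfg files on the network node look like, assuming the addresses from the tables above (the exact files from my setup may differ slightly):

# /etc/sysconfig/network-scripts/ifcfg-br-ex (sketch)
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.11
NETMASK=255.255.255.0
GATEWAY=192.168.0.1

# /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch)
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none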

OpenStack Installation

I installed RDO using the “packstack” installer, which itself can be installed with:

$ yum install -y \
    http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
$ yum install -y openstack-packstack

Once installed, I generated an “answer file” like this:

$ packstack --gen-answer-file answers.txt

Then I edited the answer file so that it would install OpenStack on our multi-node, multi-network setup. Below are the most important settings that I changed. First, to install keystone, I set:

CONFIG_KEYSTONE_HOST=192.168.10.10

This installs keystone on the controller node.

Nova is configured as follows:

CONFIG_NOVA_INSTALL=y
CONFIG_NOVA_(API|CERT|VNCPROXY|CONDUCTOR|SCHED)_HOST=192.168.10.10
CONFIG_NOVA_COMPUTE_HOSTS=192.168.10.12,192.168.10.13
CONFIG_NOVA_NETWORK_HOST=

This installs all Nova services on the controller node, with the exception of Nova Compute which goes on the 2 compute nodes. Note that Nova Network is not installed, because I use Neutron.

Glance is installed on the controller node by the following lines:

CONFIG_GLANCE_INSTALL=y
CONFIG_GLANCE_HOST=192.168.10.10

By default, Packstack will configure glance to store images in “/var/lib/glance” which has 400GB of free space on the controller node.

The Neutron configuration looks like this:

CONFIG_NEUTRON_INSTALL=y
CONFIG_NEUTRON_SERVER_HOST=192.168.10.11
CONFIG_NEUTRON_L3_HOSTS=192.168.10.11
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_DHCP_HOSTS=192.168.10.11
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_L2_PLUGIN=openvswitch
CONFIG_NEUTRON_METADATA_HOSTS=192.168.10.11
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=vmdata:1:4094
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth2:eth2

This installs the Neutron server, the L3 and DHCP agents, and the metadata server on the network node. The L2 connectivity on the network node and on the compute nodes is provided by the openvswitch plugin. It uses VLANs for connectivity between hosts and for separation of traffic. A maximum of 4094 tenant networks is supported, all of which go over the eth2 interface.

Finally, the OpenStack dashboard (Horizon) is installed by these two lines:

CONFIG_HORIZON_INSTALL=y
CONFIG_HORIZON_HOST=192.168.10.10

The resulting answer file that I used is here (IceHouse) and here (Havana). To run it, use:

$ packstack --answer-file answers.txt

Installation will take about 20 minutes.

Configuration Tweaks

I had to make a few configuration tweaks to make OpenStack work well. Some of them had to do with the fact that a Ravello application has an internal and an external view of DNS: a node that is called “controller.localdomain” on the inside will be called “controller-<appname>-<random>.srv.ravcloud.com” on the outside. This is required because you can have multiple instances of the same application, but it causes a few problems.

The first problem is that Horizon uses CSRF protection, implemented by the Django framework, that prevents it from responding to requests that come from the wrong site (via the HTTP “Referer” header). The packstack installer will try to set this up correctly by default, but because it doesn’t know the external host name, it gets it wrong. To fix this, I needed to change the following setting in “/etc/openstack-dashboard/local_settings”:

ALLOWED_HOSTS = ['*']

Another tweak related to the split DNS view is that the VNC console link will redirect to the wrong place. To fix this, the following setting needs to be updated in “/etc/nova/nova.conf” on the compute nodes:

novncproxy_base_url=http://<external-hostname>:6080/vnc_auto.html

I didn’t want to hardcode the external host name, so I wrote a simple startup script that detects the external IP of the VNC proxy before OpenStack starts up, and changes it in nova.conf. The script is installed in “/etc/init.d”, and can be found here.
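
The script itself is not reproduced here, but a minimal sketch of the idea looks something like the following (the method used to discover the external address is an assumption, not necessarily what the original script does):

#!/bin/sh
# Hypothetical sketch: discover the externally visible address and patch
# novncproxy_base_url in nova.conf before the Nova services start.
# The discovery method below (a public "what is my IP" service) is an assumption;
# substitute whatever discovery mechanism your environment provides.
EXTERNAL_IP=$(curl -s http://checkip.amazonaws.com)
sed -i "s|^novncproxy_base_url=.*|novncproxy_base_url=http://${EXTERNAL_IP}:6080/vnc_auto.html|" \
    /etc/nova/nova.conf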

Another change I made was to enable injection of the root password into a guest. This feature is enabled by default in Havana but not in IceHouse. It can be re-enabled by adding the following statement to “/etc/openstack-dashboard/local_settings”:

OPENSTACK_HYPERVISOR_FEATURES = {
  'can_set_password': True
}

In the IceHouse release there’s a missing feature in Packstack: it doesn’t configure some new mandatory settings in Neutron. The feature has been implemented upstream but is not yet part of RDO. I had to make the following manual addition to /etc/neutron/neutron-server.conf on the network node:

nova_url = http://192.168.10.10:8774/v2
nova_admin_username = admin
nova_admin_tenant_id = <admin tenant id>
nova_admin_password = <admin password>
nova_admin_auth_url = http://192.168.10.10:5000/v2.0/
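
The value for <admin tenant id> can be looked up with the keystone CLI, using the credentials that packstack wrote to “/root/keystonerc_admin”; for example:

$ source /root/keystonerc_admin
$ keystone tenant-list | grep admin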

Last, and certainly not least, the following setting allows you to use the (binary translation based) nested hardware virtualization support in Ravello. In “/etc/nova/nova.conf”, set:

libvirt_type=kvm

This will run our OpenStack guest instances at full speed!!
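
To double-check that hardware virtualization is actually exposed inside the compute nodes (and hence that KVM will work), you can run something like the following on each compute node:

$ egrep -c 'vmx|svm' /proc/cpuinfo   # should print a count greater than 0
$ ls -l /dev/kvm                     # should exist once the kvm modules are loaded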

Logging in for the First Time

After I installed OpenStack, the Horizon dashboard was available on the HTTP port of the public IP of the controller node. The username is “admin” and the password is stored by packstack in the file “/root/keystonerc_admin”. The dashboard looks like this:

ravello-openstack-horizon

On this screen you can see the two hypervisors and the CPU and memory available on them. Note that the hypervisors are reported as QEMU even though KVM is enabled.

To run my first virtual machine, I took the following once-off preparation steps:

    • Upload an image into the Glance service. I used a Fedora 20 cloud image. The fact that Glance can download an image straight from a URL makes this very easy. To create the image, go to Admin -> Images -> Create Image (a CLI equivalent is sketched after this list).
    • Create a private network, a public network and a router between the private and the public network. I did this via the command line, as it didn’t seem possible to do all of this from the admin interface:
      $ source /root/keystonerc_admin
      $ neutron router-create router1
      $ neutron net-create private
      $ neutron subnet-create private 10.0.0.0/24 --name private_subnet \
          --enable-dhcp --gateway 10.0.0.1 --dns-nameserver 8.8.8.8
      $ neutron router-interface-add router1 private_subnet
      $ neutron net-create public --router:external=True
      $ neutron subnet-create public 192.168.0.0/24 --name public_subnet \
          --disable-dhcp --gateway 192.168.0.1 \
          --allocation_pool start=192.168.0.200,end=192.168.0.250
      $ neutron router-gateway-set <router-id> <external-net-id>
      

      Here, the private subnet is created with the Google DNS server 8.8.8.8, because I had some trouble getting the OpenStack built-in DNS server to work. The public subnet is created using the gateway of the external network in the Ravello application, and with an allocation pool for floating IPs. In the last command, <router-id> and <external-net-id> are the IDs returned by the router-create and net-create commands, respectively.

    • Update the flavor list if necessary in Admin -> Flavors. I updated the “m1.small” flavor to have 1GB of memory and 20GB of ephemeral storage. I deleted all flavors bigger than “m1.medium” as they are too big to run in this setup.
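
For reference, the image upload from the first step can also be done from the command line. A sketch using the glance CLI of this release (the image name is illustrative and the URL is a placeholder for the Fedora 20 cloud image location):

$ source /root/keystonerc_admin
$ glance image-create --name "Fedora 20" --is-public True \
    --disk-format qcow2 --container-format bare \
    --copy-from <fedora-20-cloud-image-url>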

Launching an Instance

Once this was done, I was able to launch a new instance by following these steps (a CLI equivalent is sketched after the list):

      • Create a new instance by going to Project -> Instances -> Launch Instance.
      • Select the “m1.small” flavor, “boot from Image”, and then the Fedora image.
      • On the “Access and Security” tab it is recommended that you set a root password.
      • On the “Networking” tab you should connect to the “private” network.
      • Finally click “Launch”.
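
The same launch can also be done from the command line. A rough nova CLI equivalent (the instance name is illustrative and the network ID is a placeholder for the “private” network created earlier):

$ source /root/keystonerc_admin
$ nova boot --flavor m1.small --image "Fedora 20" \
    --nic net-id=<private-net-id> test-instance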

It will take about 2 minutes for the Fedora instance to start. Once it’s started, you can access the console over VNC. Note that for some reason, I was not able to use the embedded console on the “Console” tab, but clicking the link “Click here to only show console” made it work. Below you see a screenshot of the console.

ravello-openstack-instance

The commands show the networking configuration and that outbound connectivity works. Outbound connectivity from the network node uses port mapping, so the outbound IP address corresponds to a virtual machine in Amazon EC2, which is where my application was running.

Instance Performance

I could not resist sharing some initial results on performance. Many people have tried to install development and test OpenStack setups on virtualized hardware using QEMU as the hypervisor, and this is very slow. Below is a table with a few simple benchmarks comparing OpenStack/KVM to OpenStack/QEMU. The benchmarks were run in a Fedora 20 guest.

Benchmark   OpenStack/KVM   OpenStack/QEMU   Speedup
Dhrystone   26.6 mlps       1.06 mlps        2,500%
Whetstone   3,473 mwips     253 mwips        1,373%
Boot time   103 secs        390 secs         379%

Summary

In this post I’ve shown that you can install OpenStack in Ravello to quickly get up and running with a multi-node OpenStack IceHouse installation. The setup that I’ve demonstrated uses the nested virtualization features of Ravello to run guests at full speed, and uses the full L2 access of our software defined network to create private tenant networks using VLANs.

As soon as we have released our nested SVM feature, we will make a Blueprint available with the above OpenStack configuration.

About Ravello Systems

Ravello is the industry’s leading nested virtualization and software-defined networking SaaS. It enables enterprises to create cloud-based development, test, UAT, integration and staging environments by automatically cloning their VMware-based applications in AWS. Ravello is built by the same team that developed the KVM hypervisor in Linux.



By Geert Jansen

Geert Jansen is Director, Product Marketing at Ravello Systems. In his role he is responsible for developer relations and technical marketing.

  • Tahder

    A good article to read.

    But I am a little confused about the network configuration in the ifcfg files. Does only the network node need to be changed, as stated in the paragraph below?

    “The network node has a special network configuration. I created an openvswitch bridge called “br-ex” for external access, and I added the physical port “eth0” to it. This is how packstack expects it to be. The ifcfg files are here and here.”

    How about eth2 and br-eth2, referring to your vmdata network? There are heaps of documents on this, which I found daunting. Mine are:

    DEVICE=eth2
    ONBOOT=yes
    NM_CONTROLLED=no
    BOOTPROTO=none
    PROMISC=yes

    DEVICE=br-eth2
    TYPE=Ethernet
    ONBOOT=yes
    NM_CONTROLLED=no
    BOOTPROTO=static
    IPADDR=192.168.0.11
    NETMASK=255.255.255.0

    Hope you can help me.

    • Geert Jansen

      Hi Tahder,

      the bridge “br-eth2” is created automatically by Packstack. You do not need to create it yourself; you only need to create “br-ex” yourself. Packstack cannot create br-ex, presumably because that might be the interface by which it is accessing the remote system.

      • Tahder

        Thanks for the info Geert, but I have another problem with my existing configuration: the instances can’t get an IP from the DHCP agent (neutron) if I do:
        neutron router-interface-add router1 private_subnet

        In short, if I add a router interface to the private subnet I created, instances don’t get an IP, but without it there are no problems and they get an IP. But I need internet access on my instances. When I looked at the router tab, the router was ACTIVE but the public_subnet (External Gateway) was DOWN. Any hints or ideas?

  • Dev

    Hi, can you please add the steps on how to add a cinder node or the cinder service on the compute/controller?

    • Geert Jansen

      Hi Dev,

      I haven’t installed Cinder myself but it should be as simple as setting:

      CONFIG_CINDER_INSTALL=y
      CONFIG_CINDER_HOST=192.168.10.10

      in the answer file.

      • Dev

        Geert, I tried that but it is giving an error. I found on the internet that if we are using packstack then we need to first create a volume group on a node and then we can install cinder. Although, If we are talking about a real production setup then cinder is one the critical component of openstack I believe. And the official openstack documents are complex to understand for cinder. If you can help!