
How to model and test NFV deployments on AWS & Google Cloud

Author:
Hemed GurAry, CISSP and CISA, Amdocs
Hemed GurAry is a Cloud and Security Architect with Amdocs. He specializes in network and application architecture for finance and telco customers, with experience as a PMO and a lead team member on key projects. His ongoing passion is hacking new technologies.

Network Function Virtualization (NFV) has taken the networking world by storm, bringing benefits such as cost savings, network programmability and standardization, to name a few.

Ravello, with its nested virtualization, software-defined networking overlay and easy-to-use ‘drag and drop’ platform, offers a quick way to set up these environments. Because Ravello is a cloud-based platform, it is available on demand and makes it possible to build sophisticated deployments without investing the time and money needed to create an NFVI from scratch.

This three-part blog series will walk you through building a complete NFV deployment on Ravello with a working vFW service chain onboard. The deployment is based on Juniper Contrail and OpenStack and comprises three nodes. In this first part we will install and configure the NFV setup.

Deployment Architecture

VMs

Start with three empty virtual servers, each with the following properties: 4 CPUs, 32GB of memory, 128GB of storage and one network interface.

Deployment Architecture - VMs

Note: It’s important to define a hostname and use a static IP for each server to preserve the setup’s state.

Software

The following software packages are used in this tutorial:

  • Ubuntu Precise Pangolin Minimal Server 12.04.3
  • Juniper Contrail release 2.01 build 41 + Openstack Icehouse
  • Cirros 0.3.4

Network

The three virtual servers running on Ravello are connected to our underlay network, CIDR: 10.0.0.0/24.

Three overlay networks were configured in the Contrail WebUI:

  • Management – 192.168.100.0/24
  • Left – 10.16.1.0/24
  • Right – 10.26.1.0/24

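We created these networks through the Contrail WebUI once the cluster was up. As a rough equivalent, the same networks could also be created from the controller with the Neutron CLI; the sketch below is only an illustration that assumes the Icehouse-era neutron client is available and that admin credentials have already been sourced:

    # Hedged sketch: create the three overlay networks and their subnets via the Neutron CLI.
    # Assumes admin credentials (e.g. a keystonerc/openrc file) are sourced on the controller.
    neutron net-create Management
    neutron subnet-create --name Management-subnet Management 192.168.100.0/24
    neutron net-create Left
    neutron subnet-create --name Left-subnet Left 10.16.1.0/24
    neutron net-create Right
    neutron subnet-create --name Right-subnet Right 10.26.1.0/24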

Configuration Steps

Below are step-by-step instructions on how to configure the setup:

  1. Set up VMs and install the operating system
  2. Download Contrail packages and install controller node
  3. Fabric testbed.py population
  4. Install packages on compute nodes and provision setup
  5. Setup self-test

Step 1: Set up VMs and install the operating system

We will start by configuring the Ravello application, setting up the VMs and installing the operating system on each VM. This guide focuses on the elements specific to Contrail, so if you don’t know how to build a Ravello application please refer to the Ravello User Guide first.

It is also assumed you are able to install Ubuntu on the servers, either from installation media or by using a preconfigured image. We installed Ubuntu 12.04.3 on an empty Ravello image and then reused a snapshot.

The following properties are the same for all the VMs:

  • CPUs: 4
  • Mem Size: 32GB
  • Display: VMware SVGA
  • Allow Nested Virtualization: Yes
  • Disk: hda
  • Disk Size: 128GB
  • Controller: VirtIO
  • Network Name: eth0
  • Network Device: VirtIO
  • User: root
  • Password: Adm1n2

These are the individual properties of the three VMs:

Host        IP          Supplied services           Role
CP99        10.0.0.40   22, 8080, 80, 443, 8143     Controller
compute1    10.0.0.41   22                          Compute node
compute2    10.0.0.42   22                          Compute node
  1. Set up the three servers and install the operating system with the OpenSSH role.
  2. Update the /etc/hostname file with the server’s hostname.
  3. Update the /etc/hosts file to contain the following:
    127.0.0.1	localhost	<hostname, e.g. CP99>
    10.0.0.40	CP99
    10.0.0.41	compute1
    10.0.0.42	compute2
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
  4. Update the /etc/network/interfaces file:
    # The primary network interface
    auto eth0
    iface eth0 inet static
    address <Server’s IP>
    netmask 255.255.255.0
    gateway 10.0.0.1
    dns-nameservers 8.8.8.8 8.8.4.4
  5. Last, validate the installation by going over the following checklist (a short script covering these checks follows this list):
    • Validate SSH connectivity from your workstation.
    • Validate that all of the servers are time-synced.
    • Validate that all servers can ping one another (use hostnames to confirm they are resolvable).
    • Validate that all servers can ssh and scp between one another.
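
Here is a minimal sketch that covers most of this checklist from CP99; it assumes the /etc/hosts entries above are in place and that root SSH is allowed between the servers:

    # Minimal validation sketch, run from CP99 (hostnames per the /etc/hosts file above)
    for h in CP99 compute1 compute2; do
        echo "== $h =="
        ping -c 1 "$h"            # reachability and name resolution
        ssh root@"$h" date        # SSH access; compare timestamps for rough time sync
        ssh root@"$h" hostname    # confirm the hostname was applied
    done

Repeat the ping and ssh checks from the other two servers as well to confirm full mesh connectivity.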

Step 2: Download Contrail packages and install controller node

There are three methods to get the Contrail packages:

  • Build Open Contrail packages from source
  • Download pre-built Open Contrail packages
  • Download pre-built Contrail packages

For this guide we will use the latter.

Note: This procedure is specific to installing Contrail 2.0X on Ubuntu 12.04.3 and includes an upgrade to kernel 3.13.0-34.

  1. Head over to Contrail’s download page and download the application package:
    contrail-install-packages_2.01-41-icehouse_all.deb

  2. Copy the application package file to the /tmp/ folder on CP99:
    scp /tmp/contrail-install-packages_2.01-41-icehouse_all.deb root@<CP99 public IP>:/tmp
  3. SSH to CP99 and install the package
    dpkg -i /tmp/contrail-install-packages_2.01-41-icehouse_all.deb
  4. Run the following command to create a local Contrail repository and fabric utilities at /opt/contrail/
    cd /opt/contrail/contrail_packages; ./setup.sh
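
As a quick sanity check (assuming setup.sh completed without errors), confirm that the Fabric utility and the fabfile it installed under /opt/contrail/ are in place:

    # Sanity check: fab should be installed and the testbeds directory should exist
    fab --version
    ls /opt/contrail/utils/fabfile/testbeds/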

Step 3: Fabric testbed.py population

Create a Fabric’s testbed.py file with the relevant configuration:

  1. Create testbed.py using nano editor
    nano /opt/contrail/utils/fabfile/testbeds/testbed.py
  2. Paste the following block of text to the nano editor and save the file
    from fabric.api import env
    #Management ip addresses of hosts in the cluster
    host1 = 'root@10.0.0.40'
    host2 = 'root@10.0.0.41'
    host3 = 'root@10.0.0.42'
    #External routers if any
    #for eg. 
    #ext_routers = [('mx1', '10.204.216.253')]
    ext_routers = []
    
    #Autonomous system number
    #router_asn = 64512
    router_asn = 64512
    
    #Host from which the fab commands are triggered to install and provision
    host_build = 'root@10.0.0.40'
    
    #Role definition of the hosts.
    env.roledefs = {
        'all': [host1, host2, host3],
        'cfgm': [host1],
        'openstack': [host1],
        'control': [host1],
        'compute': [host2, host3],
        'collector': [host1],
        'webui': [host1],
        'database': [host1],
        'build': [host_build],
    }
    
    #Openstack admin password
    env.openstack_admin_password = 'Adm1n2'
    
    #Hostnames
    env.hostnames = {
        'all': ['CP99', 'compute1', 'compute2']
    }
    
    env.password = 'Adm1n2'
    #Passwords of each host
    env.passwords = {
        host1: 'Adm1n2',
        host2: 'Adm1n2',
        host3: 'Adm1n2',
    
        host_build: 'Adm1n2',
    }
    
    #For reimage purpose
    env.ostypes = {
        host1: 'ubuntu',
        host2: 'ubuntu',
        host3: 'ubuntu',
    }
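
Before moving on, it is worth checking that the file parses cleanly and that Fabric picks it up. A simple check, assuming the paths used above:

    # Confirm testbed.py has no syntax errors and that fab can load the fabfile
    python -m py_compile /opt/contrail/utils/fabfile/testbeds/testbed.py
    cd /opt/contrail/utils && fab --list | head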

Step 4: Install packages on compute nodes and provision setup

Use Fabric to install the packages on the compute nodes, upgrade the Linux kernel and provision the whole cluster:

  1. Issue the following command from the controller
    cd /opt/contrail/utils; fab install_pkg_all:/tmp/contrail-install-packages_2.01-41-icehouse_all.deb
  2. Upgrade Ubuntu kernel
    fab upgrade_kernel_all
  3. Perform installation
    fab install_contrail
  4. Provision cluster
    fab setup_all
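
Note that the kernel upgrade typically reboots the nodes, so allow them to come back up before continuing. The short sketch below confirms that all three nodes are running the expected kernel (3.13.0-34, per the note in Step 2):

    # Verify the upgraded kernel on every node (IPs per the table above)
    for h in 10.0.0.40 10.0.0.41 10.0.0.42; do
        echo -n "$h: "
        ssh root@"$h" uname -r    # expect a 3.13.0-34 kernel after fab upgrade_kernel_all
    done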

Step 5: Setup self-test

To finalize part one, we will use three methods to test the health of our new setup (a short check of the compute nodes follows these steps):

  • Contrail’s status commands
  • Horizon login test
  • Contrail web GUI monitoring
  1. To get Contrail’s status run the following command from the controller: contrail-status

    Note: Allow up to 10 minutes for the whole system to spin up
    Expected output:

    == Contrail Control ==
    supervisor-control:           active
    contrail-control              active
    contrail-control-nodemgr      active
    contrail-dns                  active
    contrail-named                active
    
    == Contrail Analytics ==
    supervisor-analytics:         active
    contrail-analytics-api        active
    contrail-analytics-nodemgr    active
    contrail-collector            active
    contrail-query-engine         active
    
    == Contrail Config ==
    supervisor-config:            active
    contrail-api:0                active
    contrail-config-nodemgr       active
    contrail-discovery:0          active
    contrail-schema               active
    contrail-svc-monitor          active
    ifmap                         active
    
    == Contrail Web UI ==
    supervisor-webui:             active
    contrail-webui                active
    contrail-webui-middleware     active
    
    == Contrail Database ==
    supervisor-database:          active
    contrail-database             active
    contrail-database-nodemgr     active
    
    == Contrail Support Services ==
    supervisor-support-service:   active
    rabbitmq-server               active
  2. Next run the openstack-status command
    openstack-status
    Expected output:
    == Nova services ==
    openstack-nova-api:           active
    openstack-nova-compute:       inactive (disabled on boot)
    openstack-nova-network:       inactive (disabled on boot)
    openstack-nova-scheduler:     active
    openstack-nova-volume:        inactive (disabled on boot)
    openstack-nova-conductor:     active
    == Glance services ==
    openstack-glance-api:         active
    openstack-glance-registry:    active
    == Keystone service ==
    openstack-keystone:           active
    == Cinder services ==
    openstack-cinder-api:         active
    openstack-cinder-scheduler:   active
    openstack-cinder-volume:      inactive (disabled on boot)
    == Support services ==
    mysql:                        inactive (disabled on boot)
    rabbitmq-server:              active
    memcached:                    inactive (disabled on boot)
    == Keystone users ==
    Warning keystonerc not sourced
  3. Log in to Horizon by browsing to the following URL:
    http://<Controller’s Elastic IP>/horizon
    1. Use the credentials we set earlier in testbed.py: u:admin/p:Adm1n2
  4. Log in to Contrail’s web GUI by browsing to the following URL:
    http://<Controller’s Elastic IP>:8080
    1. Use the credentials we set earlier in testbed.py: u:admin/p:Adm1n2
    2. Review the Monitor dashboard to check for system alerts.
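
The contrail-status and openstack-status commands above were run on the controller; the compute nodes can be checked in the same way. A short sketch, assuming root SSH between the nodes as set up in Step 1:

    # Check the vRouter agent status on the compute nodes as well
    for h in compute1 compute2; do
        echo "== $h =="
        ssh root@"$h" contrail-status
    done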

Summary

At this stage you should have a working multi-node Contrail setup with two compute nodes, where you can explore most of Contrail’s functionality. Tune in for the next blog posts, which explain:

  • How to functionally test the setup
  • How to install a simple gateway
  • How to configure a vFW service chain

I would like to thank Igor Shakhman from my group in Amdocs and Chen Nisnkorn from Ravello Systems for collaborating with me on this project.

About Ravello Systems

Ravello is the industry’s leading nested virtualization and software-defined networking SaaS. It enables enterprises to create cloud-based development, test, UAT, integration and staging environments by automatically cloning their VMware-based applications in AWS. Ravello is built by the same team that developed the KVM hypervisor in Linux.

Check out our product demo video.
