This blog describes how to work with OpenStack networking on Ravello Systems, which enables OpenStack lab environments in the cloud. When configuring networks in an OpenStack environment on Ravello, you are essentially setting up nested KVM and overlay networks. Ravello’s…
In this blog, we will describe the process of setting up a fully functional all-in-one environment with the latest upstream release of OpenStack Liberty on the public cloud. This eliminates the need for physical hardware and lets you build environments that can scale up for testing, demo, and training purposes. We have built the environment in Ravello Systems and saved it as a blueprint. Ravello Systems' nested virtualization capability enables the nested KVM setup required for running OpenStack on AWS and Google Cloud.
Welcome to the final part of my OpenStack series on constructing and scaling OpenStack models in AWS / Google Cloud via Ravello (we started several weeks ago with the one-click blueprint referenced here).
In this entry we will wrap up by installing Ceph and configuring Cinder and Nova to use it as the backing store for volumes. We are using Ceph here as a distributed, highly resilient object store, and more specifically using RBD (RADOS Block Device) to back those volumes (essentially virtual machine disks).
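For context, wiring Cinder and Nova to Ceph RBD mostly comes down to a handful of config options. The snippet below is a minimal sketch, assuming Ceph is already installed with a `volumes` pool and a `cinder` keyring (the pool and user names here are illustrative assumptions, not values from the post); `crudini` is used just to edit the ini files:

```shell
# Cinder: enable an RBD-backed volume backend (section/option names are
# the standard Cinder ones; pool and user names are assumptions)
crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
crudini --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
crudini --set /etc/cinder/cinder.conf ceph rbd_pool volumes
crudini --set /etc/cinder/cinder.conf ceph rbd_user cinder
crudini --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf

# Nova: store ephemeral instance disks in RBD as well (optional)
crudini --set /etc/nova/nova.conf libvirt images_type rbd
crudini --set /etc/nova/nova.conf libvirt images_rbd_pool vms

systemctl restart openstack-cinder-volume openstack-nova-compute
```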
Building an OpenStack lab from scratch with Packstack on AWS and Google Cloud – Installing OpenStack via Packstack
Packstack is meant to be a really easy way to install OpenStack – and it is. It skips quite a few things you would want for a production deployment, and the configuration it produces tends to deploy a single instance of services that should really be clustered, or at least given some form of HA. But it works – for messing around it's great. You get relatively sane, working configurations that you can reference and tinker with, and it scales horizontally surprisingly far. The underlying Puppet modules it uses are also quite useful, and you can go in afterwards and fix its shortcomings.
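For reference, the happy path looks roughly like this on a CentOS 7 host (the RDO repo URL and package names are the standard ones for this era, but check the current RDO docs before copying):

```shell
# Install the RDO repository and Packstack
yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y openstack-packstack

# Quickest route: everything on one box
packstack --allinone

# Or generate an answer file, tweak it, then apply it
packstack --gen-answer-file=answers.txt
vi answers.txt
packstack --answer-file=answers.txt
```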
Big Switch Labs – Running self-service, on-demand VMware vCenter/ESX and OpenStack based Open SDN Fabric demo environments in AWS and Google Cloud
At Big Switch Networks, we are taking key hyperscale data center networking design principles and applying them to fit-for-purpose products for enterprises, cloud providers and service providers. Our Open SDN Fabric products, built using bare metal switching hardware and centralized controller software, deliver the simplicity and agility required to run a modern data center network. Through seamless integration and automation with VMware (vSphere/NSX) and OpenStack cloud management platforms, virtualization and networking teams are now able to achieve 10X operational efficiencies compared to legacy operating models.
In this blog, we will describe how to scale up the blueprint described in this entry.
The easiest way to scale is to increase the amount of RAM and CPU reserved for each compute node when copying the blueprint. However, there is a limit to how far this will take you, so this week we will go over how to add additional compute nodes to the OpenStack blueprint.
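To preview the idea: Packstack is re-runnable against its answer file, so adding a compute node is mostly a matter of appending its IP to `CONFIG_COMPUTE_HOSTS` and re-applying. A sketch with placeholder IPs (10.0.0.10 as the controller, 10.0.0.11 as the existing compute node, 10.0.0.12 as the new one):

```shell
# Append the new node's IP to the existing compute host list
sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=10.0.0.11,10.0.0.12/' answers.txt

# Skip already-provisioned hosts so the second run only touches the new node
sed -i 's/^EXCLUDE_SERVERS=.*/EXCLUDE_SERVERS=10.0.0.10,10.0.0.11/' answers.txt

packstack --answer-file=answers.txt
```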
In this blog, we will give step-by-step instructions to build a multi-node OpenStack lab with Packstack that you can run on AWS and Google Cloud. You can build and run these labs on Ravello Systems, whose platform makes AWS and Google Cloud look like real hardware. Ravello's technology consists of a high-performance nested virtualization engine and an overlay network that enable developers, ISVs and enterprises deploying OpenStack in their data centers to run development, testing, staging and upgrade-testing environments in AWS or Google Cloud with KVM hardware acceleration.
Multi Node OpenStack Kilo lab on AWS and Google Cloud with Externally Accessible Guest Workload – How to configure OpenStack networking on Ravello Systems Part 1
Last week we went into how to prep an image for Ravello/AWS/Google/ESXi. This week we’re going to leapfrog ahead a bit and talk about networking and OpenStack.
OpenStack is highly complicated for a number of reasons. Chief among them is that what it seeks to do is replace a set of highly complex silos. Second, but not far behind, is that it does this via a collection of independently developed microservices.
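As a taste of what the networking discussion builds toward, here is the kind of Kilo-era Neutron CLI sequence involved: create a flat external provider network, give it a subnet with a floating-IP pool, and attach a router. The names, CIDRs, and the `extnet` physical network label are placeholders, not values from the post:

```shell
# Flat external provider network mapped to the 'extnet' physical network
neutron net-create external_network --provider:network_type flat \
    --provider:physical_network extnet --router:external

# Subnet with a floating IP allocation pool, DHCP off
neutron subnet-create --name public_subnet --disable-dhcp \
    --allocation-pool start=192.168.0.100,end=192.168.0.200 \
    --gateway 192.168.0.1 external_network 192.168.0.0/24

# Router that uplinks tenant networks to the outside world
neutron router-create router1
neutron router-gateway-set router1 external_network
```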
OpenStack Kilo Blueprint
Now that Kilo has had a bit of soak time, and with the next release of Red Hat OpenStack Platform to be based on it, I thought it time to revisit OpenStack. Using the same methods as the Juno installation from my previous blog entry, I set up Kilo running on CentOS 7 using the RDO Packstack based release. The blueprint is now available on Ravello Repo, ready for you to kick the tires. The answers file lives in /root/answers.txt on the controller node. Copy the blueprint to your account and go nuts. The VMs have cloud-init, so you will need your SSH keypair. The default user for SSH with the keypair is centos. The password for the root user and the OpenStack users admin and demo is ravellosystems. Once the instance is deployed, the Horizon UI is available at https://PUBLIC.IP.OF.CONTROLLER from any modern browser. Just accept the self-signed certificate at the warning screen.
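Once you're in, a quick way to sanity-check the deployment (the keypair path is a placeholder; `keystonerc_admin` is where Packstack drops the admin credentials):

```shell
# SSH in with the keypair uploaded to Ravello; default user is 'centos'
ssh -i ~/.ssh/my-ravello-key.pem centos@PUBLIC.IP.OF.CONTROLLER

# Become root, load admin credentials, and check that services are up
sudo -i
source /root/keystonerc_admin
openstack service list
nova service-list
```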
Mirantis OpenStack (MOS) is a hardened OpenStack distribution with the Fuel deployment orchestrator. It uses PXE boot to set up the other nodes in the OpenStack cloud, making it very easy to quickly stand up a multi-node OpenStack environment. Most traditional Mirantis OpenStack deployments are done on bare metal, where there is support for PXE, full access to layer 2 networking, hardware acceleration, etc. However, that requires capex investment in physical hardware and a longer lead time to get everything provisioned. I approached Ravello to leverage their technology to set up Mirantis OpenStack on public cloud, so I could overcome these challenges, and it's been great to partner with them.
ManageIQ is an open source cloud management platform. It implements features like chargeback, governance, security policies, orchestration and self-service on top of various virtualization solutions, private clouds, and public clouds. ManageIQ is the open source project on which the commercial Red Hat CloudForms product is built.
This entry has been in the works for a while and now it comes to fruition. These days OpenStack is the buzzword. Companies are adopting it, testing it, and finding new ways to use it at astounding rates. The project is moving constantly, with a six month cycle between major releases. Keeping up with it can be daunting. I say this as someone who has spent the last year up to my eyeballs in OpenStack. I teach both the Red Hat and Linux Foundation versions of the OpenStack training.
The SUSE OpenStack Cloud distribution is among the leading OpenStack platforms. With any OpenStack KVM setup, you need hardware to deploy it and a lot of time to set up and configure the environment to get it up and running.
In this blog I want to talk about testing and running OpenStack lab environments and how Ravello technology allows you to create such labs in a matter of minutes by utilizing the power of public clouds. More specifically, I'll show you how we started on a project to build out a 500-node fully functional Red Hat RDO distribution. The final setup had 100 compute nodes behind one controller node and included in total more than 400 CPUs and 1.6 TB of RAM. If you think about it, it's probably one of the largest OpenStack deployments ever, running entirely on AWS. Disclaimer: the project was done jointly with Red Hat developers. You can read more about it here.
Version 2.3, 2015-01-29
A distributed OpenStack installation with 100 Nova compute nodes
This blog covers my experience scaling Red Hat Enterprise Linux OpenStack Platform to 100 compute nodes behind a single controller and a dedicated Neutron networking node on the Ravello Cloud.
Someone famously said “You don’t learn to walk by reading a book. You learn by doing, and by falling over.” And this is very much true when it comes to technical learning. I still remember my first few programming classes back in my engineering days – I read some chapters in the book, I listened to every word my professor said, but nothing could compare to the learning that came from writing and compiling my first few pieces of code. Fast forward many years later to my days working at HP’s storage division and it wasn’t until I physically installed some storage arrays and switches and configured my own SAN in the data center that I deeply understood the how and why of zoning hosts, exposing LUNs and managing a SAN.
As part of Ravello’s Learning Tracks, this guide provides resources with step by step instructions for learning OpenStack in your own time, but without requiring any hardware. We will achieve this by using a cloud-based lab running on Ravello. If you don’t already have a Ravello account you can use a two week free trial to get started.
In my previous blog, I talked about how to bring up OpenStack Icehouse in less than 5 minutes. Here comes OpenStack Juno, the tenth release – and you can try it out immediately in your own lab on AWS, using Ravello's nested hypervisor, which lets you run nested KVM in the cloud. This blog explains how to set up your own lab for trying Juno, and you can always email me if you want me to share my Juno blueprint with you (for an instant deployment).
If you are a software ISV who has to port your products to run on OpenStack, then you are probably living through the challenge of not having enough OpenStack test environments for your development and testing teams. OpenStack multi-node deployment is not easy – now imagine having to do it multiple times and maintain the environments for your product development efforts. There is also the issue of not having enough hardware for OpenStack test environments.
Until today, it was not possible to run hypervisors (such as KVM, ESXi, etc.) in cloud environments, as cloud providers do not expose the instruction set needed for hypervisors to run (see here). However, we recently announced support for KVM in AWS, using Ravello's nested virtualization technology.