In this blog I’m going to focus on how to extend your VMware vSphere on-premises data center to the public cloud. With Ravello, you can run ESXi nodes in AWS or Google Cloud and easily connect them to your data center. You can therefore spin up as many VMware ESXi nodes as you need, on demand, and pay only for what you use. We call this the InfinityDC. Both the on-premises data center and the ESXi nodes running in AWS can be managed from the same VMware vCenter, providing a seamless, scalable fabric.
Ravello’s nested virtualization and overlay networking technology enables fast application development and testing by encapsulating entire application environments in cloud-agnostic capsules. This makes it easy to quickly spin up hundreds of copies of a capsule in the cloud, as is typical of a continuous integration setup. There is often a need to connect the Ravello environment to another public cloud, or to on-premises private cloud resources such as servers, databases, and repositories. For example, in a continuous integration setup where the code repository is on premises, the Ravello environment in the cloud must be reachable via a secure tunnel.
The goal of this article is to show how to set up a secure VPN between two Ravello environments, one in AWS EC2 and one in Google Cloud. This setup mocks a scenario where one environment runs on Ravello in either Google Cloud or AWS, while the other could be an on-premises data center, a customer’s VPC in AWS, or some third-party data center.
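As a rough sketch of what such a tunnel can look like – the public IPs, tunnel addresses, and key path below are placeholders of my choosing, not taken from the article – a simple point-to-point static-key OpenVPN link between the two environments could be brought up like this:

```shell
# Hypothetical sketch: a static-key OpenVPN tunnel between one VM in each
# Ravello environment. All addresses and paths below are assumptions.

# Generate a shared static key once, then copy it to both endpoints:
openvpn --genkey --secret /etc/openvpn/static.key

# On the AWS-side VM (public IP 198.51.100.10 assumed):
openvpn --remote 203.0.113.20 --dev tun0 \
        --ifconfig 10.8.0.1 10.8.0.2 \
        --secret /etc/openvpn/static.key --daemon

# On the Google Cloud-side VM (public IP 203.0.113.20 assumed):
openvpn --remote 198.51.100.10 --dev tun0 \
        --ifconfig 10.8.0.2 10.8.0.1 \
        --secret /etc/openvpn/static.key --daemon
```

Remember that the OpenVPN port (UDP 1194 by default) must be opened on both VMs’ external services for the endpoints to reach each other across clouds.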
This guide will show you how to install the Windows version of vCenter 6.0 in your nested ESXi lab on Ravello. It does not cover VCSA 6.0 – that’s coming next. If you haven’t yet installed vSphere, we suggest you start with: How to create vSphere 6.0 image on Ravello
As most of you probably know, besides implementing a hypervisor capable of running regular VMs, we’ve also implemented the CPU virtualization extensions – VT-x for Intel CPUs and SVM for AMD CPUs. These extensions, in essence, allow running other hypervisors such as KVM or VMware’s ESXi on top of Ravello. In this blog I’m going to focus on using DHCP for the 2nd-level guests running on ESXi. This step is optional – skip it if you intend to use only static IPs for your 2nd-level guests.
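One common way to hand out addresses to 2nd-level guests is to run your own DHCP server on a helper VM attached to the same network as the nested guests. As an illustrative sketch only – the interface name, address range, and gateway below are assumptions, not values from this post – a minimal dnsmasq invocation might look like:

```shell
# Hypothetical sketch: serve DHCP to nested (2nd-level) guests from a
# helper VM on the same Ravello network. Interface and ranges are assumed.
sudo dnsmasq \
  --interface=eth1 \
  --bind-interfaces \
  --port=0 \
  --dhcp-range=10.0.10.100,10.0.10.200,255.255.255.0,12h \
  --dhcp-option=option:router,10.0.10.1 \
  --dhcp-option=option:dns-server,10.0.10.1
```

`--port=0` disables dnsmasq’s DNS function so it acts purely as a DHCP server on that segment; the nested guests then pick up leases from the 10.0.10.x range over the virtual switch.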
This blog was written with guidance from Scott Lowe around best practices of VMware data center design and automation.
As most of you are aware, we recently announced the public beta of a new feature that allows users to run VMware ESXi™ on AWS or Google Cloud. Essentially, we have implemented Intel VT / AMD-V functionality in software in our hypervisor, HVX. That makes the underlying cloud look like real x86 hardware – complete with the silicon extensions required to run modern hypervisors like ESXi and KVM. In this blog, I am going to illustrate how to set up a large-scale, 250-node VMware ESXi data center in AWS for less than $250/hr. We believe this could be extremely useful for enterprises for upgrade testing of their VMware vSphere™ environment or for new product and feature testing.
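To make the pricing claim concrete, the back-of-the-envelope arithmetic is simply the stated budget divided by the node count – 250 nodes under $250/hr works out to a ceiling of $1 per node-hour:

```shell
# Restating the numbers from the post: 250 ESXi nodes for under $250/hr.
nodes=250
budget_dollars=250                                # hourly budget ceiling
per_node_cents=$(( budget_dollars * 100 / nodes ))  # cents per node-hour
echo "per-node ceiling: ${per_node_cents} cents/hr"  # prints: per-node ceiling: 100 cents/hr
```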