Isilon OneFS is a scale-out NAS storage solution from EMC that uses intelligent software to scale data across vast quantities of commodity hardware. It replaces the three layers of the traditional storage model (file system, volume manager, and data protection) with a unified clustered file system that has built-in, scalable data protection, obviating the need for volume management. EMC makes the Isilon OneFS Simulator available for download at no charge for non-production use. Installing it in a realistic data-center-like environment, so as to get a feel for the user interface and administrative tasks, requires ESXi infrastructure. One option is to invest in hardware and set up a multi-host ESXi lab environment to install the EMC Isilon OneFS simulator. The alternative is to leverage public clouds like AWS and Google Cloud to set up the ESXi lab and install and configure the EMC Isilon OneFS simulator modules on top of it.
AWS and Google Cloud do not natively support nested virtualization, but the Ravello HVX platform (running on top of AWS and Google Cloud) implements Intel VT / AMD-V (including nested page table support) in software, which enables users to run ESXi with hardware acceleration in AWS or Google Cloud.
This lets you build a complex, large ESXi lab setup that mimics your data center and run and test EMC Isilon simulator components on top of it. The entire lab can be provisioned on demand, is available across the world, and you pay only for what you use.
In this blog we will cover installing and configuring a 3-node EMC Isilon OneFS Simulator cluster as a second-level guest running on nested ESXi hosts in Ravello, managed by VMware vSphere 6.0.
Once we have the first Isilon node deployed within vSphere, we will save it as a vCenter template so that additional nodes are much easier to add to the Isilon cluster.
Prerequisites
- A Ravello account. If you don’t already have one, you can start your free trial.
- A working VMware vSphere 5.5 or 6.0 environment running on your Ravello Cloud. More information can be found in this post.
- One or more ESXi nodes set up with access to the external network, as described in this post.
Virtual Infrastructure Requirements
- VMware ESXi 5.5.x or 6.0
- VMware vCenter 6.0 (Windows or Linux)
- VMware vCenter Converter Standalone 5.5 or 6.0
- 17 GB of free space for the first node
- 16 GB per additional node
- A Windows Server jump host within the Ravello environment/application
High-level overview of the steps involved
- Download the EMC Isilon VM from the EMC trialware site. (An EMC Support account is not required)
- Deploy the VM using VMware vCenter Converter Standalone
- Convert the VM for use with VMware vSphere
- Deploy the VM to vCenter Server
- Save the VM as a vCenter template and/or to the vSphere Content Library (for vCenter 6.0 only)
- Configure the first node in the cluster
- Configure subsequent nodes in the cluster
VMware vSphere 6.0a
- ESXi 6.0a
- vCenter Server for Windows 6.0a
Each ESXi node requires 3 NICs:
- 1x External Ravello Network
- 1x Internal vCenter Only Private Network
- 1x Bridged Internal Private Network to External Ravello Network
Cluster Name: ISI-01
Reserve a minimum of 3 IPs for each network.
Internal-Only Isilon Network
- Int-A Low: 192.168.0.101 (NIC1)
- Int-A High: 192.168.0.103 (NIC1)
- Netmask: 255.255.255.0
External Network
- Ext-1 Low: <first external IP> (NIC2)
- Ext-1 High: <last external IP> (NIC2)
- Netmask: 255.255.0.0
- Gateway: <external network gateway>
(Substitute the external addresses and gateway from the subnet bridged to the external Ravello network in your environment.)
For vSphere networking, in addition to the vSwitch configuration found in this post, add an additional vSwitch to each node for the internal Isilon network, as sketched below.
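If you prefer to script this from the ESXi shell instead of using the vSphere Client, here is a minimal sketch using esxcli. The vSwitch name, port group name, and vmnic number are assumptions, so map them to your own hosts.

```
# Create a standard vSwitch for the internal Isilon network
esxcli network vswitch standard add --vswitch-name=vSwitch2

# Add a port group for the Isilon int-a interfaces
# (vSwitch2 and Isilon-Internal are placeholder names)
esxcli network vswitch standard portgroup add --portgroup-name=Isilon-Internal --vswitch-name=vSwitch2

# Attach the uplink that maps to the internal-only Ravello NIC
# (verify which vmnic it is with: esxcli network nic list)
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch2
```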
Each node must be configured with local storage. If you have followed the instructions in this post, there should be a 100 GB local disk attached to your ESXi node. Within the vSphere Client, add this disk as a datastore on each ESXi node.
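Alternatively, you can create the datastore from the ESXi shell. The sketch below follows VMware's standard partedUtil/vmkfstools recipe; <device>, <end-sector>, and the datastore label are placeholders to fill in from your environment.

```
# Find the 100 GB local disk (check the Size field in the output)
esxcli storage core device list

# Get the usable sector range for the disk, then create a GPT label and a
# single VMFS partition (the long GUID is the standard VMFS partition type)
partedUtil getUsableSectors /vmfs/devices/disks/<device>
partedUtil mklabel /vmfs/devices/disks/<device> gpt
partedUtil setptbl /vmfs/devices/disks/<device> gpt \
  "1 2048 <end-sector> AA31E02A400F11DB9590000C2911D1B8 0"

# Format the new partition as VMFS5 with a placeholder datastore label
vmkfstools -C vmfs5 -S Isilon-Local /vmfs/devices/disks/<device>:1
```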
1. Download the EMC Isilon OneFS Simulator
Download the EMC Isilon OneFS Simulator from EMC’s trialware site.
2. Deploy the Isilon VM using VMware vCenter Converter Standalone
Use the Converter Standalone version that matches your vSphere environment.
Follow the step-by-step instructions in the “Running virtual nodes on ESX” section of the Virtual Isilon Install Guide PDF, available in the simulator download from EMC’s trialware site.
More information about VMware vCenter Converter can be found on VMware’s Pubs Site.
Ravello-specific configuration
Before you save the VM as a template or power it on, set the keyboard typematic minimum delay to 2000000 in the VM’s .vmx file to avoid spurious key repeats in the nested console:
keyboard.typematicMinDelay = 2000000
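One way to apply this from the ESXi shell, assuming the VM is powered off (the datastore path and VM name below are placeholders):

```
# Append the typematic setting to the VM's configuration file
echo 'keyboard.typematicMinDelay = 2000000' >> /vmfs/volumes/datastore1/isilon-node1/isilon-node1.vmx

# Reload the VM so ESXi re-reads the edited .vmx
# (look up the Vmid with: vim-cmd vmsvc/getallvms)
vim-cmd vmsvc/reload <vmid>
```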
3. Save the VM as a vCenter template and/or to the vSphere Content Library (vCenter 6.0 only)
4. Configure the first node in the cluster
Follow the step-by-step instructions in the “Install the virtual Isilon cluster” section of the Virtual Isilon Install Guide PDF, available in the simulator download from EMC’s trialware site.
Ravello-specific configuration
Deploy the Isilon template to the local storage of the first ESXi node
5. Configure subsequent nodes in the cluster
Follow the step-by-step instructions in the “Add the rest of the nodes to the cluster” section of the “Virtual Isilon Install Guide.pdf”, available in the simulator download from EMC’s trialware site.
Ravello-specific configuration
Deploy the Isilon template to the local storage of the remaining ESXi nodes.
Accessing the management console
To access the management console, open a browser to https://<Ext Network IP of node 1>/
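To quickly confirm the console is reachable from the Windows jump host (or any machine on the external network), you can use a simple curl check; -k skips certificate validation because the simulator presents a self-signed certificate, and note that some OneFS versions serve the web administration UI on port 8080 instead of 443.

```
# Basic reachability check against node 1's external IP
curl -k https://<Ext Network IP of node 1>/

# If nothing answers on 443, try the OneFS web administration port
curl -k https://<Ext Network IP of node 1>:8080/
```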
Make sure that for each Isilon guest VM running on top of ESXi in Ravello, the total number of guest vCPUs is less than or equal to the ESXi host’s CPUs, and likewise for memory; for example, if an ESXi host has 4 CPUs, do not assign more than 4 vCPUs in total to the Isilon guests on that host. We have not tested this setup for heavy-duty file scanning or other Isilon functional tests. This setup is meant for user trials, demos, training, and similar use. Setting up a lab for more intensive EMC Isilon OneFS functional operations would require performance optimization of the underlying ESXi hosts running in Ravello.
About Ravello Systems
Ravello is the industry’s leading nested virtualization and software-defined networking SaaS. It enables enterprises to create cloud-based development, test, UAT, integration and staging environments by automatically cloning their VMware-based applications in AWS. Ravello is built by the same team that developed the KVM hypervisor in Linux.