Financial institutions and enterprises require a flexible network security architecture to accommodate external network devices and servers in their DC/colo facilities. This article provides a way to design and implement such a network security architecture using Border Gateway Protocol (BGP) + VXLAN tunnels along with the VM-Series firewall from Palo Alto Networks. Ravello Network Smart Labs provides an easy way to test and deploy an architecture before moving it to the enterprise infrastructure.
The following article demonstrates the VXLAN routing feature implemented to transport packets from one tunnel endpoint to another. Its functionality is based on BGP extensions and Virtual Routing and Forwarding (VRF) technologies. The feature is useful in scenarios where you have third-party colocated equipment requiring firewall scrubbing before transmission to its final destination. Traditionally, third-party vendor equipment may need to be separated into a dedicated rack and physically wired. VXLAN and BGP overlay designs offer better flexibility when designing connections to security appliances, enabling equipment to be physically placed in any rack and tunnelled to the appropriate device.
The core architecture is a leaf-and-spine design consisting of spine, leaf and DCI switches based on Arista vEOS. All end stations are Ubuntu hosts. The security services, represented in the top half of the diagram by the Red and Blue networks, are separated by a Palo Alto firewall. The firewalls connect to the core through two datacenter interconnect (DCI) devices.
The Blue network operates normally: all east-west traffic goes directly to its destination without any scrubbing. All Red traffic, however, is forwarded via the VXLAN overlay to the Palo Alto firewall for scrubbing. Traffic to the Palo Alto is carried in a VXLAN-encapsulated tunnel across the spine nodes.
The diagram displays the canvas from the Palo Alto Networks / Arista vEOS blueprint. The Palo Alto firewalls are set up for web-based management and CLI access; just point your browser to the Ravello IP or DNS name. All of the vEOS nodes and Ubuntu hosts are reachable via mgmt1. Ravello's DNS service is configured automatically, enabling SSH by device host name.
The bottom half of the blueprint contains a number of leaf switches, labelled L1, L2 and L3. Each leaf switch connects into a virtual rack for testing. All virtual racks are identical except for the one connected to L2, which contains both a blue and a red host. The red network requires scrubbing.
IP & Interface Configuration
**Leaf1**
Mgmt – 192.168.0.12
Loop – 192.168.0.12

| Interface | Role |
|---|---|
| Eth1 – IP 172.16.2.3/31 | eBGP session to the connecting spine |
| Eth2 – IP 172.16.2.13/31 | eBGP session to the connecting spine |
**Leaf2**
Mgmt – 192.168.0.13
Loop – 172.16.0.13
VTEP endpoints: 172.16.0.13, 172.16.0.15, 172.16.0.16

| Interface | Role |
|---|---|
| Eth1 – IP 172.16.2.5/31 | eBGP session to the connecting spine |
| Eth2 – IP 172.16.2.15/31 | eBGP session to the connecting spine |
**Leaf3**
Mgmt – 192.168.0.14
Loop – 192.168.0.14

| Interface | Role |
|---|---|
| Eth1 – IP 172.16.2.7/31 | eBGP session to the connecting spine |
| Eth2 – IP 172.16.2.17/31 | eBGP session to the connecting spine |
**Spine1**
Mgmt – 192.168.0.10
Loop – 172.16.0.10

| Interface | Role |
|---|---|
| Eth1 – IP 172.16.2.2/31 | eBGP session to the connecting L1 |
| Eth2 – IP 172.16.2.4/31 | eBGP session to the connecting L2 |
| Eth3 – IP 172.16.2.6/31 | eBGP session to the connecting L3 |
| Eth4 – IP 172.16.2.8/31 | eBGP session to the connecting DCI1 |
| Eth5 – IP 172.16.2.10/31 | eBGP session to the connecting DCI2 |
**Spine2**
Mgmt – 192.168.0.11
Loop – 172.16.0.11

| Interface | Role |
|---|---|
| Eth1 – IP 172.16.2.12/31 | eBGP session to the connecting L1 |
| Eth2 – IP 172.16.2.14/31 | eBGP session to the connecting L2 |
| Eth3 – IP 172.16.2.16/31 | eBGP session to the connecting L3 |
| Eth4 – IP 172.16.2.18/31 | eBGP session to the connecting DCI1 |
| Eth5 – IP 172.16.2.20/31 | eBGP session to the connecting DCI2 |
**DCI1**
Mgmt – 192.168.0.15
Loop – 172.16.0.15
VTEP endpoints: 172.16.0.13, 172.16.0.15, 172.16.0.16

| Interface | Role |
|---|---|
| Eth1 – IP 172.16.2.9/31 | eBGP session to the connecting S1 |
| Eth2 – IP 172.16.2.19/31 | eBGP session to the connecting S2 |
| Eth3 – IP 172.16.4.0/31 | BLUE network |
| Eth4 – IP 10.255.1.0/31 | RED network |
**DCI2**
VTEP endpoints: 172.16.0.13, 172.16.0.15, 172.16.0.16

| Interface | Role |
|---|---|
| Eth1 – IP 172.16.2.11/31 | eBGP session to the connecting S1 |
| Eth2 – IP 172.16.2.21/31 | eBGP session to the connecting S2 |
| Eth3 – IP 172.16.4.2/31 | BLUE network |
| Eth4 – IP 10.255.1.2/31 | RED network |
All Spine and Leaf configurations can be pulled from the following GitHub Account.
All leaf nodes run standard eBGP to the two spine nodes; the leafs do not peer BGP with each other. The two DCI nodes also run eBGP to the spine nodes. This forms the base of the leaf/spine underlay network, providing core reachability. These BGP sessions are depicted by the blue lines in the diagram below.
| Node | BGP ASN | BGP Type |
|---|---|---|
| Spine1 | 64512 | eBGP to all leafs and both DCIs |
| Spine2 | 64512 | eBGP to all leafs and both DCIs |
| Leaf1 | 64514 | eBGP to both spines |
| Leaf2 | 64515 | eBGP to both spines |
| Leaf3 | 64516 | eBGP to both spines |
| DCI1 | 64517 | eBGP to both spines |
| DCI2 | 64518 | eBGP to both spines |
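As a sketch of this underlay from Leaf1's side (the addresses and ASNs come from the tables above; the interface details are simplified and the ECMP line is an assumption, so treat the GitHub configurations as authoritative):

```
! Leaf1 underlay sketch: routed /31 uplinks and eBGP (ASN 64514) to both spines
interface Ethernet1
   no switchport
   ip address 172.16.2.3/31
!
interface Ethernet2
   no switchport
   ip address 172.16.2.13/31
!
router bgp 64514
   maximum-paths 2                          ! assumed, to load-share across both spines
   neighbor 172.16.2.2 remote-as 64512      ! Spine1
   neighbor 172.16.2.12 remote-as 64512     ! Spine2
   redistribute connected
```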
Another overlay network runs on top of this BGP underlay. It is based on VXLAN and is represented by the 10.55.2.0/21 networks. Additional BGP sessions are created between L2, DCI1 and DCI2 within a newly created VRF. They are represented by the red arrows on the diagram.
The overlay offers flexibility in where nodes can be placed, enhancing the security services design. The diagram below illustrates the high-level logical map of the blueprint.
The following screenshot shows the BGP view from Spine1. It has five eBGP peerings, all in the "Established" state and learning routes from neighboring BGP peers. Maximum routes is set to 12000, and redistribution of connected and static routes is configured.
Spine2 has a similar configuration except for the endpoint IP addresses. The spine configuration is really simple: standard BGP plus IP addresses on the interfaces.
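Sketching that spine configuration (neighbor addresses and ASNs follow from the addressing and ASN tables above; the exact configuration is in the GitHub repository):

```
! Spine1 sketch (ASN 64512): five eBGP peerings over the /31 links
router bgp 64512
   neighbor 172.16.2.3 remote-as 64514      ! Leaf1
   neighbor 172.16.2.3 maximum-routes 12000
   neighbor 172.16.2.5 remote-as 64515      ! Leaf2
   neighbor 172.16.2.5 maximum-routes 12000
   neighbor 172.16.2.7 remote-as 64516      ! Leaf3
   neighbor 172.16.2.7 maximum-routes 12000
   neighbor 172.16.2.9 remote-as 64517      ! DCI1
   neighbor 172.16.2.9 maximum-routes 12000
   neighbor 172.16.2.11 remote-as 64518     ! DCI2
   neighbor 172.16.2.11 maximum-routes 12000
   redistribute connected
   redistribute static
```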
The VXLAN overlay is where the magic happens. A VRF named vxlan20 is configured under SVI VLAN 20, which is mapped to VNI 20. The VXLAN flood list is set to 172.16.0.13, 172.16.0.15 and 172.16.0.16, corresponding to the VTEP endpoints L2, DCI1 and DCI2.
Within the BGP VRF configuration, the other BGP peers forming the overlay tunnel are explicitly set. From the perspective of Leaf 2, this will be DCI1 and DCI2.
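Pulling those pieces together, the Leaf 2 overlay configuration looks roughly like this. The SVI address and the overlay neighbor addresses are illustrative assumptions drawn from the 10.55.2.0/21 range; only the VRF/VLAN/VNI mapping, flood list and loopback are stated in the article:

```
! Leaf2 sketch: VRF vxlan20 on SVI Vlan20, mapped to VNI 20
vrf definition vxlan20
!
vlan 20
!
interface Vlan20
   vrf forwarding vxlan20
   ip address 10.55.2.2/21                  ! assumed address in the overlay range
!
interface Vxlan1
   vxlan source-interface Loopback0         ! Leaf2 loopback 172.16.0.13
   vxlan vlan 20 vni 20
   vxlan flood vtep 172.16.0.13 172.16.0.15 172.16.0.16
!
router bgp 64515
   vrf vxlan20
      neighbor 10.55.2.3 remote-as 64517    ! DCI1 (assumed overlay address)
      neighbor 10.55.2.4 remote-as 64518    ! DCI2 (assumed overlay address)
```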
The diagram below displays the BGP and VXLAN configuration for Leaf 2.
The following screenshot displays the VXLAN address table and the status of the VXLAN interface. The address table shows the remote MAC addresses learnt and the corresponding VTEPs. Head-end replication is used to forward BUM (Broadcast, Unknown unicast and Multicast) traffic; earlier VXLAN deployments used IP multicast as the control plane for this instead.
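The same state can be checked from the EOS CLI on any of the VTEPs:

```
leaf2#show vxlan vtep
leaf2#show vxlan address-table
leaf2#show interfaces vxlan 1
```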
The Palo Alto firewalls run default configurations with static routing towards DCI1 and DCI2 respectively. They do not share any state; traffic engineering towards each firewall is done with static routes, with metrics assigned on DCI1 and DCI2 and redistributed into BGP for global reachability.
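On the DCI side this amounts to a static route pointing at the firewall, redistributed into BGP. A sketch for DCI1 follows; the destination prefix and metric are illustrative assumptions, while the next hop follows from the 10.255.1.0/31 RED link to FW1:

```
! DCI1 sketch: steer an overlay prefix at FW1 and advertise it into BGP
ip route 10.55.2.0/21 10.255.1.1 10       ! assumed prefix; FW1 next hop, metric 10
!
router bgp 64517
   redistribute static
```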
For an initial test, run a ping from V1 on the RED network to a test loopback on Spine1. Enter Bash on the vEOS and run tcpdump, grabbing packets on the interfaces connecting to the spines.
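From the vEOS Bash shell, a capture along these lines will show the VXLAN-encapsulated traffic (UDP port 4789); the interface name here is illustrative:

```
spine1#bash
[admin@spine1 ~]$ sudo tcpdump -nn -i et1 udp port 4789
```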
To prove that this traffic runs through the firewall, shut down Eth4 on DCI1 (the interface connecting to FW1). The pings will then fail.
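Shutting the interface is a one-liner from the DCI1 CLI (and `no shutdown` restores it):

```
dci1#configure
dci1(config)#interface Ethernet4
dci1(config-if-Et4)#shutdown
```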
For advanced configuration and deep packet inspection, log in and tailor the security configuration as you see fit. Administration is carried out via the CLI or the Palo Alto Networks GUI, which is accessible from your web browser.
The above highlights a network security architecture showing how enterprises and financial institutions can integrate third-party servers and network equipment into their datacenters while keeping their networks secure. It also demonstrates the flexibility of introducing these devices using VXLAN as the connection medium. Overlays offer a flexible approach to connecting endpoints regardless of physical location; the underlay simply needs endpoint reachability.
If you are interested in trying this setup out or building your own deployment, open a Ravello account, download the Palo Alto Networks VM-Series and copy over the configurations. Ravello's nested virtualization and networking overlay lets you create a high-fidelity replica of the deployment in the cloud to model and test with a couple of clicks, making the on-prem deployment of this architecture easier. The blueprint can be found at this link – BGP + VXLAN Overlay.
About Ravello Systems
Ravello is the industry’s leading nested virtualization and software-defined networking SaaS. It enables enterprises to create cloud-based development, test, UAT, integration and staging environments by automatically cloning their VMware-based applications in AWS. Ravello is built by the same team that developed the KVM hypervisor in Linux.