Friday 30 December 2016

UIXP Network development report for 2016/17

UIXP is currently involved in an upgrade of its core network. This blog entry serves as a short technical report on the need for the change, the new network design, and the current state of progress.

Where is the exchange coming from?


Illustration 1: UIXP Network prior to the upgrade

In the past the network was built around a pair of HP ProCurve 3400CL switches. Each of these switches offers 20 auto-sensing 10/100/1000 ports plus 4 dual-personality ports, each of which can be used either as an RJ-45 10/100/1000 copper port or as an open mini-GBIC slot for fibre-based transceivers. The network was operated as a flat switched network with no separation of traffic types. Services on a Dell PowerEdge 750 server were connected via an HP ProCurve 2524 100 Mb/s switch in the core, which was interconnected with the peering switches via a 100 Mb/s CAT5e copper cable.

What is the motivation to upgrade?

With the addition of the Akamai Content Delivery Network (CDN) cache to the exchange, and two Google caches located on member networks but accessible through the exchange, traffic levels rose significantly and it became necessary to revisit the network design.

Physical network 

Illustration 2: UIXP - Physical layout

One of the limiting factors of the old design was the interconnect between the switches: a single physical IEEE 802.3z 1000Base-X link providing a 1 Gb/s trunk. Given the distribution of members between the two switches, this link was approaching the limits of its bandwidth capacity.
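
As a rough illustration of the problem (the per-member figures below are invented for the example, not measured values), a quick back-of-the-envelope check shows how little headroom a single 1 Gb/s trunk leaves once several members exchange traffic across it:

    # Back-of-the-envelope check of the old 1 Gb/s inter-switch trunk.
    # The per-member peak figures are purely illustrative, not measurements.
    TRUNK_CAPACITY_MBPS = 1000  # single IEEE 802.3z 1000Base-X link

    # Hypothetical peak traffic (Mb/s) that must cross the trunk because the
    # two peers (or a peer and a cache) sit on different switches.
    cross_trunk_peaks_mbps = [180, 150, 220, 120, 90, 160]

    total_mbps = sum(cross_trunk_peaks_mbps)
    print(f"Aggregate cross-trunk peak: {total_mbps} Mb/s "
          f"({total_mbps / TRUNK_CAPACITY_MBPS:.0%} of trunk capacity)")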

The first step was a complete rebuild of the Core and Peering elements of the network and the separation of these functions into separate cabinets. A Juniper EX4300 was donated by the Uganda Communications Commission (UCC) as the peering access switch. It offers over 4 times the throughput of the HP ProCurve 3400CL switches, 4 Small Form Factor Pluggable plus (SFP+) ports that support IEEE 802.3ae 10GBASE-X, and 48 IEEE 802.3ab 1000Base-T ports for member peers. This switch is placed as a Top of Rack (ToR) switch in the Peering cabinet, allowing eXchange members to interconnect at 10, 100 or 1000 Mb/s. The Virtual Chassis feature of the EX4300 is attractive given the potential to connect a second such switch in the second peering cabinet in the future.

The main core switch is a Cisco Nexus 3548, donated to the exchange by Packet Clearing House (PCH), with 48 fixed SFP+ ports supporting IEEE 802.3ae 10GBASE-X as well as 10GBASE-CU with Twinax Direct Attach Cables (DAC). Lower speeds are supported via Gigabit Line Card (GLC) SFPs for both fibre and copper 1 Gb/s interfaces, configurable to lower speeds where necessary. This switch is interconnected to the Juniper peering switch with a 10 Gb/s link configured as a Virtual Local Area Network (VLAN) trunk. The HP ProCurve 2524 is currently connected to the core switch to cater for lower-speed interfaces within the core network, which reduces the number of SFPs needed in the Cisco Nexus. It will be replaced by one of the HP ProCurve 3400CL switches once the members begin to migrate to the new peering cabinets.
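
As a small operational aid (a sketch only; the VLAN names and IDs are hypothetical placeholders, not the exchange's actual allocations), the allowed-VLAN lists on each end of the core-to-peering trunk can be compared programmatically to catch mismatches before members are migrated:

    # Sketch: compare the VLANs allowed on each end of the 10 Gb/s trunk
    # between the Nexus 3548 (core) and the EX4300 (peering).
    # VLAN IDs and names are hypothetical.
    nexus_trunk_vlans = {10: "peering", 20: "management", 30: "dmz"}
    ex4300_trunk_vlans = {10: "peering", 20: "management"}

    # Report any VLAN present on one side of the trunk but not the other.
    for vlan_id in sorted(set(nexus_trunk_vlans) ^ set(ex4300_trunk_vlans)):
        side = "EX4300" if vlan_id in nexus_trunk_vlans else "Nexus"
        name = nexus_trunk_vlans.get(vlan_id) or ex4300_trunk_vlans.get(vlan_id)
        print(f"VLAN {vlan_id} ({name}) is missing on the {side} side of the trunk")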

The Akamai CDN Cache connects to the Core switch via a 10 Gb/s fibre interface while the Proxmox cluster nodes each connect via 1 Gb/s copper interfaces.

The old Cisco 3500 router has given way to a less power-hungry Cisco C2801 router in the new core cabinet. As this router's function is to facilitate the distribution of traffic between the internal UIXP networks and the Internet, the bandwidth requirement is actually quite small and the C2801 is quite adequate for the task.

Infrastructure as a Service (IaaS) platform

To deliver core services it was necessary to build a robust hypervisor-based Infrastructure as a Service (IaaS) platform that could support the orchestration of both Virtual Machines (VM) and Containers (CT) for the functions required at the eXchange.

The selection criteria for the hypervisor platform considered the need for it to be a Free and Open Source (FOSS) platform that supports High Availability (HA) as well as both VMs and CTs. The options explored were OpenStack and Proxmox, both of which meet the requirements for HA and IaaS. OpenStack is released under the Apache License, while Proxmox is licensed under the GNU Affero General Public License (AGPL) version 3, so both are FOSS.

OpenStack, however, was considered more suitable for a service provider wishing to offer cloud services to end customers. This is not a requirement for the exchange and adds significant complexity. While the Proxmox Virtual Environment (VE) is not as fully featured as OpenStack, it is powerful, simpler to deploy and use, and has all the features required by the eXchange.

Proxmox is Debian GNU/Linux based and uses the robust Kernel Virtual Machine (KVM) technology and LinuX Containers (LXC). A major plus of Proxmox is its HA Cluster feature: when a VM or CT instance is configured for HA and its physical host fails, the instance is automatically restarted on one of the remaining Proxmox VE Cluster nodes. The Proxmox VE HA Cluster is based on proven GNU/Linux HA technologies and was judged likely to provide the stable and reliable HA service required.
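
As an example of how the cluster and its HA resources can be inspected programmatically (a minimal sketch assuming the third-party proxmoxer Python library and the standard Proxmox VE REST API; the hostname and credentials below are placeholders):

    # Minimal sketch: query Proxmox VE cluster nodes and HA-managed resources
    # through the REST API using the third-party "proxmoxer" library.
    # Hostname, user and password are placeholders.
    from proxmoxer import ProxmoxAPI

    pve = ProxmoxAPI("pve.example.org", user="root@pam",
                     password="secret", verify_ssl=False)

    # List cluster nodes and their status.
    for node in pve.nodes.get():
        print(f"node {node['node']}: {node.get('status', 'unknown')}")

    # Resources under HA management: the VMs/CTs that will be restarted on a
    # surviving node if their host fails.
    for res in pve.cluster.ha.resources.get():
        print(f"HA resource {res['sid']} state={res.get('state', 'n/a')}")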

Initially the Proxmox cluster consists of the Dell PowerEdge 750 and an old Dell server; however, thanks to an upcoming donation of an additional Dell PowerEdge 750 from the Internet Society, it will be possible to upgrade the Proxmox cluster hardware. This VE cluster is an essential element of the exchange and hosts the various Virtual Network Functions (VNF) and server instances as either VMs or CTs.

Logical network 

Illustration 3: UIXP Logical network design

Considering a number of factors, the need to separate traffic types and information/network security to name but two, it was decided to split the network into logical elements: a peering Local Area Network (LAN) containing the member peering interfaces as well as the Root Servers (RS) and the Autonomous System 112 (AS112) nameserver; a private management LAN for intercommunication between the functions; and a DeMilitarised Zone (DMZ) LAN to permit controlled access to the various networking devices, VMs and CTs.
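
To keep the plan in one place, the logical segments can be captured in a small table; the VLAN IDs and prefixes below are hypothetical placeholders (RFC 5737 documentation prefixes), chosen only to make the sketch concrete:

    # Sketch of the planned logical segments. VLAN IDs and prefixes are
    # hypothetical placeholders, not the exchange's actual allocations.
    from ipaddress import ip_network

    segments = {
        "peering":    {"vlan": 10, "prefix": ip_network("192.0.2.0/24"),
                       "hosts": "member peers, root servers, AS112 nameserver"},
        "management": {"vlan": 20, "prefix": ip_network("198.51.100.0/26"),
                       "hosts": "device management, Proxmox nodes"},
        "dmz":        {"vlan": 30, "prefix": ip_network("203.0.113.0/27"),
                       "hosts": "externally reachable VMs/CTs"},
    }

    for name, seg in segments.items():
        print(f"{name:<11} VLAN {seg['vlan']:<3} {seg['prefix']} -> {seg['hosts']}")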

Current state

Well, most of the physical network elements are already in place and we await the migration of the peers to the new peering cabinets. The Proxmox cluster is running, supports the core services built on VMs and containers, and will be beefed up by the addition of the second Dell PowerEdge 750. Once that is complete, the work of separating the LAN into the logical elements just described will begin. Looking forward to a busy 2017.

Abbreviations

AGPL Affero General Public License
AS112 Autonomous System 112
CDN Content Delivery Network
CT Containers
DMZ DeMilitarised Zone
FOSS Free and Open Source
GLC Gigabit Line Card
GNU GNU's Not Unix
HA High Availability
IaaS Infrastructure as a Service
KVM Kernel Virtual Machine
LAN Local Area Network
LXC LinuX Containers
PCH Packet Clearing House
RS Root Servers
SFP+ Small Form Factor Pluggable plus
ToR Top of Rack
UCC Uganda Communications Commission
VE Virtual Environment
VLAN Virtual Local Area Network
VM Virtual Machines
VNF Virtual Network Functions

Bibliography

Packet Clearing House. Available: https://www.pch.net
Uganda Communications Commission. Available: http://www.ucc.co.ug
The Internet Society. Available: http://www.internetsociety.org
Proxmox Server Solutions GmbH. Available: https://www.proxmox.com/en/
Akamai Technologies. Available: https://www.akamai.com
Cisco Nexus 3548 Switch. Available: http://www.cisco.com/c/en/us/products/switches/nexus-3548-switch
Juniper EX4300 Switch. Available: http://www.juniper.net/uk/en/products-services/switching/ex-series/ex4300
HP ProCurve 3400CL. Available: http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c01809608
HP ProCurve 2524. Available: http://www.hp.com/ecomcat/hpcatalog/specs/J4813A.htm
Dell PowerEdge 750. Available: http://www.dell.com/downloads/global/products/pedge/en/750_specs.pdf
