Testbed

In this section, we describe how we implemented the testbed used to carry out the experiments reported in this work.

The testbed was initially deployed on a Dell server with an Intel i7 processor, 16 GB of DDR3 RAM, and a 1 TB HDD for secondary storage. We used VMware ESXi 4 as the hypervisor to virtualize the machines on which we installed the mobile client emulators, four slices of ContextNet, and an instance of InterSCity. Despite the server's limited processing, memory, and storage capacity, it was possible to migrate ContextNet from its Data Center version to the distributed version. More details on how to deploy ContextNet are available on the LAC Wiki.

After the distributed version of ContextNet was approved, we migrated the entire system to virtual machines hosted in the cloud of the Informatics Department of PUC-Rio (Cloud-DI). Unlike other hypervisor systems, the Cloud-DI hypervisor pre-allocates all of a virtual machine's resources, so experiments being carried out simultaneously by other teams do not interfere with one another. The platform consists of 16 hosts with 336 vProcs, 1120 GB of main memory, and 7 TB of storage. Communication between hosts is done via a 48-port Gigabit Ethernet switch. KVM, originally developed by Qumranet (now part of Red Hat), is used as the hypervisor.

The initial experiments were carried out with six virtual machines: four to virtualize the four slices of ContextNet, one to emulate mobile clients, and one to operate as a router connecting the other virtual machines. The router, in addition to controlling the flow of data between the virtual machines, allows selective access to the Internet: it permits SSH access to the virtual machines while keeping the environment free of the extraneous traffic inherent to systems connected to the Internet. The Figure shows the configuration used in the first experiments. Subsequently, we were able to add six more virtual machines to the initial set, expanding the scalability tests of the MUSANet architecture.
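The router VM's selective-access policy described above could be realized with standard Linux forwarding rules. The sketch below illustrates the idea with `iptables`; the interface names and internal subnet are assumptions for illustration, not taken from the actual testbed.

```shell
# Hypothetical setup: ens160 is the uplink to the Internet,
# ens192 is the internal testbed network (10.0.0.0/24 assumed).

# Enable IPv4 forwarding on the router VM
sysctl -w net.ipv4.ip_forward=1

# Default policy: drop forwarded traffic between testbed and Internet
iptables -P FORWARD DROP

# Allow the testbed VMs to talk to each other on the internal interface
iptables -A FORWARD -i ens192 -o ens192 -j ACCEPT

# Selectively allow inbound SSH to the VMs, plus return traffic,
# keeping all other Internet traffic out of the controlled environment
iptables -A FORWARD -i ens160 -p tcp --dport 22 -d 10.0.0.0/24 -j ACCEPT
iptables -A FORWARD -o ens160 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

With a default-drop forwarding policy, only the explicitly whitelisted SSH flows cross the gateway, which is what keeps the experiment traffic isolated.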

Testbed Architecture

We chose Ubuntu as the operating system for all virtual machines, including the gateway. It is a free and open-source operating system with no license limitations for commercial use. In any case, the choice of operating system should not affect the deployment of the testbed or the implementation platform, since everything implemented in the tests is compatible with most Linux distributions available on the market.

To emulate the behavior of real networks, we chose tc, the Linux traffic control utility, which allows shaping traffic on the network: configuring the bandwidth per interface, the delay in packet delivery, and even the percentage of packets lost. The developer also has higher-level tools for this task, such as Wondershaper.
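As a minimal sketch of this kind of emulation, the commands below combine an HTB class for bandwidth limiting with a netem qdisc for delay and loss. The interface name and the numeric values are illustrative only, not the parameters used in our experiments.

```shell
# Limit egress on eth0 to 10 Mbit/s (interface and rates are examples)
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit

# Attach netem under the HTB class: 50 ms delay and 1% packet loss
tc qdisc add dev eth0 parent 1:10 handle 10: netem delay 50ms loss 1%

# Inspect the configured queueing disciplines and their statistics
tc -s qdisc show dev eth0

# Remove the emulation when the experiment ends
tc qdisc del dev eth0 root
```

Running tc on each VM's interface (or on the router) lets each link in the topology emulate a different network profile.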

Another important tool the developer can use to monitor the application's behavior over time is SNMP (RFC~1157). Installing MRTG, as exemplified in the description of the OpenALPRSample application, allows configuring the gateway (VM005) as an SNMP management station. The data can be viewed through web pages over the Internet using an Apache server, as shown in Figure.
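A typical MRTG setup on the gateway could look like the sketch below, using MRTG's own cfgmaker and indexmaker tools. The package names, community string, and web root are assumptions for a Debian/Ubuntu host, not the exact configuration of VM005.

```shell
# Install the SNMP agent, MRTG, and the Apache web server (assumed packages)
apt-get install snmpd mrtg apache2

# Generate an MRTG configuration that polls the local SNMP agent
# ("public" is the default read-only community string, assumed here)
cfgmaker --global 'WorkDir: /var/www/html/mrtg' \
         --output /etc/mrtg.cfg public@localhost

# Build the index page that Apache will serve
indexmaker /etc/mrtg.cfg > /var/www/html/mrtg/index.html

# Poll the interfaces and update the graphs
# (normally invoked from cron every 5 minutes)
env LANG=C mrtg /etc/mrtg.cfg
```

Once the cron job is in place, the traffic graphs are reachable over the Internet through the Apache server, as in the figure below.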


Example of data visualization using MRTG and SNMP.