
Hongik University, South Korea

sigcomm18hackathon edited this page Aug 23, 2018 · 1 revision

NEC: An Open Source Framework for Named Data Networking (NDN), Edge Computing, and Cloud Computing

Cloud computing was proposed as a model in which computation and processing are performed in the cloud, far from end users. Offloading users' tasks to resource-rich clouds is attractive, but it falls short for real-time applications such as online gaming, augmented reality (AR), virtual reality (VR), and the Tactile Internet, which require response times of less than 1 ms. In a cloud-only model, every request must travel to the cloud and every response back from it, and the resulting backhaul traffic congests the network, imposing limitations in energy, latency, and other respects.

To reduce the burden on the backhaul, an architecture named Edge Computing (EC) has been proposed. The main goal of EC is to bring resources, computation, and power from the cloud layer down to the edge layer. The edge then works as a mini-cloud between users and the cloud, providing the same services the cloud provides. EC is still a hierarchical architecture, however: some traffic is sent on to the cloud, subject to the policies and requirements of a given scenario. The edge is not an isolated model; it must remain connected to the cloud in order to obtain services it lacks, or to forward its own traffic for further analysis. The edge can be an eNodeB, a server in a restaurant or coffee shop, or even a user device. The edge is based entirely on the host-centric approach (TCP/IP). This model is not promising for future networks and may not cope with intermittent connectivity or highly mobile scenarios such as vehicular networks, and since most networks today are mobile/wireless, mobility deserves particular attention.
To cope with these issues, a future Internet paradigm named Named Data Networking (NDN) has been proposed. NDN is independent of host location and intrinsically supports consumer mobility (producer mobility is still under research). In NDN, names are used at the network layer, and content is requested by sending an Interest for its name; users no longer care about the location where the content resides. Moreover, in-network caching is supported: content is cached on network nodes according to caching policies. In addition, the content itself is secured, instead of securing the pipe/channel. This connectionless design yields advantages in mobility, security, latency, and in-network caching, as mentioned above. However, each of these paradigms has its limitations and cannot be deployed as a standalone system that fits all scenarios. Researchers are therefore exploiting the benefits of each paradigm by combining them as overlays: to make such networks practical and to fulfill 5G-and-beyond requirements, cloud, edge, and NDN should work jointly, each used in the proper place so that they complement one another.
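As a minimal illustration of the NDN behavior described above (Interests satisfied by name, with in-network caching at intermediate nodes), the following Python sketch models a node's cache lookup and upstream forwarding. All class, method, and content names are invented for illustration; real NDN forwarding (Content Store, PIT, FIB) is far richer than this.

```python
# Illustrative sketch: an NDN node checks its local Content Store (cache)
# first, and only forwards the Interest upstream on a miss, caching the
# returned Data on the way back.

class NdnNode:
    def __init__(self):
        self.content_store = {}  # name -> data (in-network cache)

    def publish(self, name, data):
        self.content_store[name] = data

    def on_interest(self, name, upstream=None):
        # Cache hit: satisfy the Interest locally, no upstream traffic.
        if name in self.content_store:
            return self.content_store[name]
        # Cache miss: forward toward the producer and cache the reply.
        if upstream is not None:
            data = upstream.on_interest(name)
            if data is not None:
                self.content_store[name] = data
            return data
        return None  # no route / content unavailable

producer = NdnNode()
producer.publish("/hongik/video/1", b"segment-bytes")
relay = NdnNode()
# First request travels to the producer; the relay caches the reply.
consumer_data = relay.on_interest("/hongik/video/1", upstream=producer)
```

A second Interest for the same name would now be satisfied from the relay's cache without touching the producer, which is the traffic-reduction property the paragraph above relies on.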

Key Idea and Proposal

In this work we combine the aforementioned architectures and propose a combined framework comprising Cloud Computing, Edge Computing, and Named Data Networking. On the client side we use NDN, where consumers request data or services within their own network. The cloud works as the backhaul and is placed far from end users. Between them we place an Edge Computing device that acts as the network edge between NDN and the cloud. Since TCP/IP cannot be ignored, the edge and the cloud use the IP (host-oriented) approach, while NDN uses name-based communication. Our proposed system has the following advantages:

  1. Backhaul traffic is reduced: the edge device avoids forwarding traffic whenever possible, sends traffic to the cloud only in specific cases, and provides cloud-like services close to end users.
  2. Latency is reduced, since most requests are satisfied at the edge device instead of being sent to the cloud.
  3. The most innovative part of this framework is that it combines simulation with a testbed, so it can serve the cloud computing, edge computing, and NDN communities alike. The framework has three parts. NDN-related strategies are simulated in the NDN section of the framework. Tasks that cannot be handled within NDN are forwarded to the edge device, which returns the computed results/services to the end users according to its policies. If the edge cannot perform the computation either, it forwards the request to the cloud, where further processing takes place. The following section explains the complete architecture of our proposed framework; we provide a prototype implementation, and contributions from the community are welcome.
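The tiered fallback described in point 3 can be sketched as a simple chain of handlers (a hypothetical Python illustration, not the actual framework code; the tier contents and request names are invented):

```python
# Illustrative three-tier request flow: try the NDN layer first,
# fall back to the edge, and finally to the cloud.

def ndn_layer(request):
    cache = {"cached-video": "data-from-ndn"}   # in-network cache (stub)
    return cache.get(request)

def edge_layer(request):
    services = {"resize-image": "result-from-edge"}  # edge services (stub)
    return services.get(request)

def cloud_layer(request):
    # The cloud is assumed to be able to serve anything that reaches it.
    return f"result-from-cloud:{request}"

def handle(request):
    for tier in (ndn_layer, edge_layer, cloud_layer):
        result = tier(request)
        if result is not None:
            return result
```

Only requests that neither the NDN cache nor the edge can satisfy ever generate backhaul traffic, which is the source of the latency and traffic savings claimed above.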

Architecture details

Our framework architecture comprises three sections, described as follows.

1. Named Data Networking (NDN)

In this section we use the concepts of NDN: all communication is content-centric, and content is requested at the network layer using Interest names. Without loss of generality, we use a simple three-node scenario (a consumer node, a relay node, and a provider) in ndnSIM, the official NDN simulator. Note that the provider also acts as a gateway between the edge device and NDN whenever required. In this scenario, the consumer requests content and/or services from the relay node. If the relay node holds the requested content or service, it returns it to the user, as the NDN architecture prescribes; otherwise the Interest is forwarded to the provider node, which processes it and sends the data or service back to the user. It is important to note that this node may lack the processing power or computational resources for some complex task requests. In that case, to avoid request failure or non-availability, we switch from NDN to the IP approach at the producer. This producer (a gateway node running both NDN and IP protocols) then acts as an IP-based system and sends the request to the edge device through a Web Application Programming Interface (Web API). These Web APIs connect the ndnSIM simulation to the edge computing device.
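The gateway's decision logic can be sketched as follows. This is a minimal Python illustration, not the actual ndnSIM/ASP.NET code; the content names and the "complexity" check are invented, and the edge call is injected as a function so the sketch stays self-contained (in the real framework it would be an HTTP POST to the edge Web API).

```python
# Hypothetical sketch of the producer/gateway: answer simple Interests
# inside NDN, hand complex tasks to the edge device over IP.

def producer_on_interest(name, local_content, can_compute, forward_to_edge):
    # 1. Plain NDN: reply from locally published content.
    if name in local_content:
        return local_content[name]
    # 2. Produce the result locally if the task is simple enough.
    if can_compute(name):
        return f"computed:{name}"
    # 3. Switch to IP: hand the task to the edge device's Web API.
    return forward_to_edge(name)

content = {"/video/1": "data"}
simple = lambda name: name.startswith("/simple/")
edge_stub = lambda name: f"edge-result:{name}"  # stands in for the Web API call

reply = producer_on_interest("/complex/transcode", content, simple, edge_stub)
```

Injecting `forward_to_edge` also mirrors how the unit test layer (Section 2.F) can exercise this logic without a live edge device.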

2. Edge Computing:

In our proposed framework, we divide the edge architecture into the following layers. The edge and cloud applications are developed in Microsoft Visual Studio 2017 (developer edition).

A. API Layer: All requests from ndnSIM hit the Web API layer (Microsoft ASP.NET Web API 2) of the edge device. This layer provides an interface for the requests coming from ndnSIM and forwards them all to the services layer. The APIs are hosted on IIS (Internet Information Services), which also hosts the websites, web applications, and services needed by users and developers.
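The API layer's role, reduced to its essence, is to map incoming request paths onto the services layer. The real implementation is ASP.NET Web API 2 routing on IIS; the following Python sketch only illustrates the idea, with made-up routes and service names:

```python
# Illustrative API layer: route incoming requests to the services layer.

def services_layer(service, payload):
    # Stub for the services layer described in Section 2.B.
    return {"service": service, "payload": payload, "status": "ok"}

ROUTES = {
    "/api/crypto": "CryptoService",
    "/api/media": "MediaService",
}

def api_layer(path, payload):
    service = ROUTES.get(path)
    if service is None:
        return {"status": "404"}       # unknown route: reject at the API layer
    return services_layer(service, payload)
```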

B. Services Layer: The services layer accepts all requests from the API layer and contains all the business/research logic of the services to be performed. In our prototype we include example services such as crypto services (for crypto-related operations), media services (e.g., audio/video conversion), document services (e.g., a PDF/Word converter), and IoT services. Again, this is a prototype implementation, and the framework is open to any kind of service being implemented in the services layer.
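Since the layer is explicitly open to new services, a pluggable registry is one natural shape for it. The sketch below is hypothetical Python (the real services are C# classes); the registration decorator and the two sample services are invented for illustration:

```python
# Illustrative pluggable services layer: contributors register a handler
# under a service name without touching the API layer.
import hashlib

SERVICES = {}

def register(name):
    def wrap(fn):
        SERVICES[name] = fn
        return fn
    return wrap

@register("CryptoService")
def crypto_service(payload):
    # Example crypto-related operation: hash the payload.
    return hashlib.sha256(payload.encode()).hexdigest()

@register("DocumentService")
def document_service(payload):
    # Stand-in for a document conversion (e.g., Word -> PDF).
    return f"pdf:{payload}"

def run_service(name, payload):
    return SERVICES[name](payload)
```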

C. Data Access Layer: The code modules in the data access layer are triggered by decisions made in the services layer, such as whether to store a result locally or to offload computation to the cloud. The data access layer then stores the results locally or sends them to the cloud, subject to the application requirements and edge policies. For local storage we use Microsoft Entity Framework (a communication framework between the database and the application) and a SQL database.
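The store-locally-or-offload decision can be sketched as below. The size threshold and the policy flag are invented for illustration; in the real framework the local store is SQL Server behind Entity Framework, and the cloud path is the IP link to the cloud application:

```python
# Illustrative data access layer: keep small, final results in the local
# database; forward anything needing further analysis to the cloud.

LOCAL_DB = {}     # stands in for the edge SQL database
CLOUD_QUEUE = []  # stands in for results forwarded to the cloud

def store_result(key, result, needs_cloud_processing=False, max_local_bytes=1024):
    if needs_cloud_processing or len(result) > max_local_bytes:
        CLOUD_QUEUE.append((key, result))
        return "cloud"
    LOCAL_DB[key] = result
    return "local"
```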

D. Component Layer: The component layer provides code modules shared among the other layers, such as helpers (a configuration helper, an email helper, and a logging module) and extensions. This layer is also open to contributions of any relevant helper. The configuration helper gives users an interface to configure the network directly, instead of navigating to a specific portion of it. The email helper provides email services to administrators, informing them about all edge activity with users and the cloud. The logging module is used to understand the flow of processes through the code, so we provide it in the component layer to ease that effort. It currently supports two log types, Debug and Exception; in the future we will add more, such as Info and Warning (this module, too, is open to contribution).
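The logging module's two current log types can be sketched as follows (a hypothetical Python illustration; the function and field names are invented, and Info/Warning could be added by extending the allowed set):

```python
# Illustrative logging helper with the two log types mentioned above.
from datetime import datetime, timezone

LOG = []

def log(level, message):
    assert level in {"Debug", "Exception"}, "unsupported log type"
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "msg": message,
    }
    LOG.append(entry)
    return entry

log("Debug", "request forwarded to services layer")
log("Exception", "cloud endpoint unreachable")
```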

E. Model Layer: The model layer is used by all layers of the framework and works as a carrier of information from one layer to another; specifically, it transports the actual data as objects between layers.

F. Unit Test Layer: We also provide a unit test layer to ease the work of contributors. If someone wants to create a new service or change an existing one, the unit test layer can test that service directly: there is no need to open a browser or to restart the whole pipeline from ndnSIM, since the process does not have to be repeated from the beginning to test a newly added service. For instance, a user does not need to start from the consumer application just to check a service.
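The idea is simply that a service can be exercised in isolation. The sketch below uses Python's `unittest` for illustration (the actual layer lives in the Visual Studio solution), with a made-up service under test:

```python
# Illustrative unit test layer: exercise a new service directly, without
# replaying the consumer -> ndnSIM -> edge pipeline.
import unittest

def media_service(payload):
    # Hypothetical service under test (stand-in for a media conversion).
    return payload.upper()

class MediaServiceTest(unittest.TestCase):
    def test_converts_payload(self):
        self.assertEqual(media_service("clip.avi"), "CLIP.AVI")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(MediaServiceTest)
)
```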

G. Web Layer: The web layer provides a graphical user interface for real-time data analysis. For instance, it shows live statistics of requests from end users to the edge and from the edge to the cloud, namely the latency of requests and services between these three modules, rendered as real-time graphs (the graphing work is still under development and open for contribution).
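The kind of summary the web layer could plot can be sketched as follows; the hop names and latency samples below are made up purely for illustration:

```python
# Illustrative aggregation of per-hop latency samples for the web layer's
# real-time graphs.
from statistics import mean

samples = {
    "user->edge": [2.1, 1.8, 2.4],      # ms, illustrative values
    "edge->cloud": [38.0, 41.5, 40.2],  # ms, illustrative values
}

def summarize(samples):
    return {hop: {"avg_ms": round(mean(vals), 2), "max_ms": max(vals)}
            for hop, vals in samples.items()}

stats = summarize(samples)
```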

3. Cloud

The cloud application comprises all of the above layers except the web layer and the offloading subpart of the data access layer, since data does not need to be sent any farther than the cloud.

4. Evaluation

We built the prototype on a desktop PC with 16 GB of RAM and an Intel Core i7-4710HQ CPU at 2.5 GHz. ndnSIM runs on Linux inside VMware, one hop away from the edge PC. For the cloud we also used a desktop PC of the aforementioned specifications; we intend to move the testbed to an Azure cloud, which will serve as the real cloud for cloud-based applications, but for testing the beta version of the framework a high-specification desktop PC sufficed. We find that our system meets our goals and can accommodate the traffic. NDN is simulated in the ndnSIM simulator, while the edge and cloud are implemented in Microsoft Visual Studio 2017. For the database we use SQL Server 2017 (developer edition) on the edge nodes. Moreover, we use Internet Information Services (IIS) to host the websites, web applications, and services needed by users and developers, such as the APIs and web interfaces.

Required Skills

NDN: ndnSIM
Edge & Cloud: ASP.NET, Entity Framework, SQL, MongoDB, Razor, JavaScript, and jQuery




Team: Muhammad Atif Ur Rehman, Rehmat Ullah
