
Gossip Learning Framework (GoLF)

This open source benchmarking framework allows you to build your own P2P learning algorithm and evaluate it in a simulated but realistic networked environment, where you can model message delay, message drop, and churn. It also contains prototype implementations of some well-known machine learning algorithms, such as SVM and logistic regression. (More will be coming soon.)

The project is related to our academic research and is partially supported by the Future and Emerging Technologies programme FP7-COSI-ICT of the European Commission through project QLectives (grant no. 231200). Related publications can be found on our personal homepages (here and here) and on arXiv.

Getting Started

The framework includes several predefined learning scenarios based on the prototype implementations of the machine learning algorithms and the well-known Iris dataset. To try them, perform the following steps:

  • getting the source: First, download the source code of the framework. The easiest way is to clone this git repository by typing git clone git://github.com/RobertOrmandi/Gossip-Learning-Framework.git. Alternatively, you can download it as a zip archive or as a tar.gz archive.

  • building it: The build process is driven by ant. To create a jar, run ant in the root directory of the project. This produces gossipLearning.jar in the bin directory of the project. (All libraries necessary for building and running the project are included in the lib directory.)

  • running a predefined simulation: To run a simulation using one of the predefined scenarios on the Iris dataset, type: res/script/run.sh training_db evaluation_db 100 scenario result (assuming a standard UNIX environment with java and gnuplot installed). The parameters of run.sh are fairly intuitive, and examples are included in the package. The first two parameters are the training and evaluation datasets, respectively, given in SVMLight format; here you can use res/db/iris_setosa_versicolor_train.dat and res/db/iris_setosa_versicolor_eval.dat. The third parameter is the number of iterations. The fourth describes the simulation environment; it is essentially a Peersim configuration file template (a configuration file with variables that are instantiated based on the training set used), and here you can use res/config/no_failure_applying_more_learners_voting10.txt. The fifth parameter names the directory where the results are generated, e.g. res/results (it has to be created before run.sh is called). Make sure to delete previously generated results before you rerun the simulation! In the res directory of the project you can find additional training datasets (db subdirectory) and other configuration templates (config subdirectory).
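The training and evaluation files use the standard SVMLight format: each line is a label followed by sparse, 1-based index:value feature pairs. A minimal Python sketch of a parser for such lines (the function name and the example feature values are illustrative, not part of GoLF):

```python
def parse_svmlight_line(line):
    """Parse one SVMLight-format line: '<label> <idx>:<val> <idx>:<val> ...'."""
    parts = line.strip().split()
    label = float(parts[0])
    features = {}
    for tok in parts[1:]:
        if tok.startswith('#'):  # trailing comment: ignore the rest of the line
            break
        idx, val = tok.split(':')
        features[int(idx)] = float(val)  # SVMLight indices are 1-based
    return label, features

# An Iris-style example line (feature values here are made up):
label, feats = parse_svmlight_line("+1 1:5.1 2:3.5 3:1.4 4:0.2")
```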

  • understanding the results: The result graphs can be found in the res/results directory and should look similar to this figure. Each curve belongs to a certain type of learning algorithm (see the labels), and each point of a curve corresponds to a point in time (see the x-axis label). Each point shows the 0-1 error averaged over the machine learning models stored by the nodes of the network, measured on a separate evaluation set (i.e. one not seen by the learning algorithm). As you can see, each line drops after a certain point in time, which means each algorithm converges.
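Concretely, the quantity plotted at each time point can be computed as in the following sketch (the model representation here, a plain prediction function, is a stand-in and not GoLF's internal one):

```python
def zero_one_error(predict, eval_set):
    """Fraction of evaluation examples that a single model misclassifies."""
    wrong = sum(1 for x, y in eval_set if predict(x) != y)
    return wrong / len(eval_set)

def averaged_zero_one_error(models, eval_set):
    """Average the 0-1 error over all models stored by the network's nodes."""
    return sum(zero_one_error(m, eval_set) for m in models) / len(models)

# Toy example: two constant classifiers on a 4-example evaluation set.
eval_set = [((0,), +1), ((1,), +1), ((2,), -1), ((3,), -1)]
always_pos = lambda x: +1  # wrong on the two negative examples -> error 0.5
always_neg = lambda x: -1  # wrong on the two positive examples -> error 0.5
avg = averaged_zero_one_error([always_pos, always_neg], eval_set)  # 0.5
```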

This is just the tip of the iceberg: the framework provides an API which makes it extensible, i.e. you can implement new learning algorithms or protocols, or define other network scenarios using the configuration mechanism of Peersim.
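To give a flavour of the kind of protocol the framework simulates, here is a toy Python sketch of gossip-based learning with stochastic gradient descent: each node holds one local example, and in every cycle it sends its current model to a random peer, which updates the model on its own example and stores it. This is a conceptual illustration only, not GoLF's actual Java API; all names and the learning rate are made up.

```python
import random

def sgd_step(w, x, y, lr=0.1):
    """One hinge-loss SGD step for a linear model w (bias stored in w[0])."""
    margin = y * (w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
    if margin < 1:  # misclassified or inside the margin: update the model
        w = [w[0] + lr * y] + [wi + lr * y * xi for wi, xi in zip(w[1:], x)]
    return w

def gossip_learning(data, rounds, seed=0):
    """Each node holds one (x, y) example; models random-walk over the nodes."""
    rng = random.Random(seed)
    n = len(data)
    models = [[0.0] * (len(data[0][0]) + 1) for _ in range(n)]
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)  # node i sends its model to a random peer j
            x, y = data[j]        # the peer updates it on its own local example
            models[j] = sgd_step(models[i], x, y)
    return models

# Linearly separable toy data: the label is the sign of the single feature.
data = [((1.0,), 1), ((2.0,), 1), ((-1.0,), -1), ((-2.0,), -1)]
models = gossip_learning(data, rounds=200)
```

The key design point this sketch mirrors is that there is no central server: models improve purely by walking the network and taking local gradient steps, which is the idea behind the asynchronous P2P SGD approach in the paper cited below.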

Further Reading

Since GoLF is built on top of Peersim, a deeper understanding of the underlying mechanisms requires familiarity with Peersim; you should work through the following tutorials: this and this. This is also necessary for understanding the configuration files of GoLF.
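For orientation, Peersim configuration files are flat key-value lists. The fragment below shows the general syntax in the style of the Peersim cycle-based tutorial (the protocol class is from Peersim's own aggregation example, not from a GoLF template):

```
simulation.cycles 100
network.size 1000

protocol.lnk IdleProtocol
protocol.avg example.aggregation.AverageFunction
protocol.avg.linkable lnk
```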

To develop a new algorithm or protocol you have to know the details of the Gossip Learning Framework, which are described in this paper; a slightly simplified version can be found in the wiki of the project.

You are almost done, but before you start development, make sure you understand the inner design concepts of the GoLF implementation. You can read about them in the project wiki, where a class diagram is also shown.

To set up your development environment (specifically for the Eclipse IDE), read our step-by-step guide, which can be found here.

Citation

If you use GoLF in your scientific work, or simply want to refer to GoLF somewhere, please cite the following paper. The full citation is:

@inproceedings{ormandi2011asynchronP2PDM,
  author = {R{\'o}bert Orm{\'a}ndi and Istv{\'a}n Heged\H{u}s and M{\'a}rk Jelasity},
  title = {Asynchronous Peer-to-Peer Data Mining with Stochastic Gradient Descent},
  booktitle = {17th International European Conference on Parallel and Distributed Computing (Euro-Par 2011)},
  year = {2011},
  pages = {528-540},
  series = {Lecture Notes in Computer Science},
  volume = {6852},
  publisher = {Springer-Verlag},
  ee = {http://dx.doi.org/10.1007/978-3-642-23400-2_49},
  bibsource = {http://www.inf.u-szeged.hu/~ormandi/papers/ormandi2011asynchronP2PDM.bib}
}
