## WS-DREAM
WS-DREAM is a package of open source code and datasets for benchmarking QoS-driven services research, especially Web service recommendation.
With both the datasets and the source code publicly released, the WS-DREAM repository makes it easy to reproduce existing approaches and, we hope, inspires further research in this area. In particular, future work on QoS prediction of Web services need not start from scratch: the WS-DREAM framework can be easily extended with new implementations. This is exactly the goal of maintaining this repository.
The repository builds on the following publications:
- Zibin Zheng, Yilei Zhang, Michael R. Lyu, "Investigating QoS of Real-World Web Services," IEEE Trans. Services Computing (TSC), 2014.
- Jieming Zhu, Pinjia He, Zibin Zheng, Michael R. Lyu, "Towards Online, Accurate, and Scalable QoS Prediction for Runtime Service Adaptation," in Proc. of IEEE International Conference on Distributed Computing Systems (ICDCS), 2014.
- Zibin Zheng, Michael R. Lyu, "Collaborative Reliability Prediction of Service-Oriented Systems," in Proc. of ACM/IEEE International Conference on Software Engineering (ICSE), 2010. [ACM SIGSOFT Distinguished Paper Award]
- Zibin Zheng, Yilei Zhang, Michael R. Lyu, "Distributed QoS Evaluation for Real-World Web Services," in Proc. of IEEE International Conference on Web Services (ICWS), 2010. [Best Student Paper Award]
- Zibin Zheng, Hao Ma, Michael R. Lyu, Irwin King, "WSRec: A Collaborative Filtering based Web Service Recommender System," in Proc. of IEEE International Conference on Web Services (ICWS), 2009.
- Zibin Zheng, Michael R. Lyu, "WS-DREAM: A Distributed Reliability Assessment Mechanism for Web Services," in Proc. of IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2008.
## Related Links
- A list of papers that use or cite WS-DREAM: http://wsdream.github.io/bibliography
- WS-DREAM open-source code: http://wsdream.github.io/code
- WS-DREAM open datasets: http://wsdream.github.io/dataset
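For quick experiments, the QoS records in the datasets above are plain-text matrices that load directly with numpy. A minimal loading sketch follows; the file name `rtMatrix.txt`, the 339-user by 5825-service shape, and the use of -1 for missing observations are assumptions about the published dataset release, so verify them against the dataset's own readme:

```python
# A minimal loading sketch; file name, shape, and missing-value
# convention are assumptions about the published dataset release.
import numpy as np

rt_matrix = np.loadtxt('rtMatrix.txt')   # entry [u, s]: response time (seconds)
print(rt_matrix.shape)                   # expected: (339, 5825)
mask = rt_matrix > 0                     # treat -1 (and 0) as unobserved
print('observed entries: %d' % mask.sum())
```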
## Code Archive
#### Baseline approaches
- UMEAN: [benchmarks/baseline/UMEAN]
- IMEAN: [benchmarks/baseline/IMEAN]
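As a concrete illustration of these two baselines, here is a minimal numpy sketch, assuming a user-service QoS matrix in which non-positive values mark missing entries; the directories above hold the authoritative implementations:

```python
import numpy as np

def umean_predict(matrix):
    """UMEAN: fill a user's missing entries with that user's mean
    over observed (positive) QoS values."""
    observed = matrix > 0
    counts = np.maximum(observed.sum(axis=1), 1)          # avoid division by zero
    user_mean = (matrix * observed).sum(axis=1) / counts
    return np.repeat(user_mean[:, None], matrix.shape[1], axis=1)

def imean_predict(matrix):
    """IMEAN: the same idea per service (column) instead of per user."""
    return umean_predict(matrix.T).T
```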
#### Neighbourhood-based approaches
- UIPCC: [benchmarks/neighbourhood-based/UIPCC]
- ADF: [benchmarks/neighbourhood-based/ADF]
- NRCF: [benchmarks/neighbourhood-based/NRCF]
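The following is a minimal sketch of the user-based half of UIPCC (user-based collaborative filtering with Pearson correlation), shown only to illustrate the neighbourhood-based family; `top_k` and the positive-correlation filter are illustrative choices, not the benchmark's exact settings:

```python
import numpy as np

def upcc_predict(matrix, u, s, top_k=10):
    """Predict the QoS of service s for user u from the top-k
    positively PCC-correlated users who have observed s."""
    observed = matrix > 0
    counts = np.maximum(observed.sum(axis=1), 1)
    means = (matrix * observed).sum(axis=1) / counts      # per-user means
    sims = []
    for v in range(matrix.shape[0]):
        if v == u or not observed[v, s]:
            continue
        common = observed[u] & observed[v]                # services both users observed
        if common.sum() < 2:
            continue
        du = matrix[u, common] - means[u]
        dv = matrix[v, common] - means[v]
        denom = np.sqrt((du * du).sum() * (dv * dv).sum())
        w = du.dot(dv) / denom if denom > 0 else 0.0
        if w > 0:                                         # keep positively correlated users
            sims.append((w, v))
    neighbours = sorted(sims, reverse=True)[:top_k]
    if not neighbours:
        return means[u]                                   # fall back to the user's mean
    num = sum(w * (matrix[v, s] - means[v]) for w, v in neighbours)
    den = sum(w for w, v in neighbours)
    return means[u] + num / den
```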
#### Model-based approaches
- PMF (a.k.a. Regularized SVD): [benchmarks/model-based/PMF]
- NMF: [benchmarks/model-based/NMF]
- BiasedMF: [benchmarks/model-based/BiasedMF]
- LN-LFM: [benchmarks/model-based/LN_LFM]
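Since PMF (regularized SVD) is the canonical model-based approach here, a compact SGD sketch may help; the hyperparameters below are illustrative defaults, not the values used by the benchmark scripts:

```python
import numpy as np

def pmf(matrix, dim=10, lr=0.001, lmbda=0.1, epochs=50):
    """Factorize the observed entries of a QoS matrix as U V^T with
    L2 regularization, trained by stochastic gradient descent."""
    m, n = matrix.shape
    rng = np.random.RandomState(0)
    U, V = 0.1 * rng.randn(m, dim), 0.1 * rng.randn(n, dim)
    rows, cols = np.nonzero(matrix > 0)                   # observed entries only
    for _ in range(epochs):
        for u, s in zip(rows, cols):
            u_old = U[u].copy()
            err = matrix[u, s] - u_old.dot(V[s])          # prediction error
            U[u] += lr * (err * V[s] - lmbda * u_old)
            V[s] += lr * (err * u_old - lmbda * V[s])
    return U.dot(V.T)                                     # dense prediction matrix
```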
#### Hybrid approaches
- CloudPred: [benchmarks/hybrid/CloudPred]
- EMF: [benchmarks/hybrid/EMF]
- NIMF: [benchmarks/hybrid/NIMF]
#### Location-aware approaches
- RegionKNN: [benchmarks/location-aware/RegionKNN]
- LACF: [benchmarks/location-aware/LACF]
- LBR: [benchmarks/location-aware/LBR]
- HMF: [benchmarks/location-aware/HMF]
- LoRec: [benchmarks/location-aware/LoRec]
#### Time-aware approaches
- Average: [benchmarks/time-aware/Baseline]
- UIPCC: [benchmarks/time-aware/UIPCC]
- PMF: [benchmarks/time-aware/PMF]
- TF: [benchmarks/time-aware/TF]
- WSPred: [benchmarks/time-aware/WSPred]
- CLUS: [benchmarks/time-aware/CLUS]
- NTF: [benchmarks/time-aware/NTF]
- TD-WSRec: [benchmarks/time-aware/TD_WSRec]
#### Online prediction approaches
- AMF: [benchmarks/online/AMF]
- OPred: [benchmarks/online/OPred]
#### Ranking-based approaches
## Dependencies
- Python 2.7 (https://www.python.org)
- Cython 0.20.1 (http://cython.org)
- numpy 1.8.1 (http://www.scipy.org)
- scipy 0.13.3 (http://www.scipy.org)
- AMF (https://github.com/wsdream/AMF)
- PPCF (https://github.com/wsdream/PPCF)
## Usage
The algorithms in WS-DREAM are mostly implemented in C++ and wrapped as a Python package for ease of use.
- Install the `wsdream` package: download the repo at https://github.com/wsdream/WS-DREAM/tarball/master, then install it with `python setup.py install --user`.
- Change directory (`cd`) to `benchmarks/` and configure the parameters in the benchmark scripts. For example, in `run_rt.py` you can set `'parallelMode': True` if you are running on a multi-core machine, or `'rounds': 1` for a quick test that finishes fast (see the illustrative snippet after these steps).
- Read the `readme.txt` of each approach, and execute the provided benchmark scripts: `python run_rt.py` and `python run_tp.py`.
- Check the evaluation results in the `result/` directory. Note that the repository already maintains the results evaluated on the WS-DREAM datasets, which are ready for immediate use.
IF YOU USE THIS PACKAGE IN ANY PUBLISHED RESEARCH, PLEASE KINDLY CITE THE FOLLOWING PAPER:
- WS-DREAM: A Package of Open Source-Code and Datasets to Benchmark QoS Prediction Approaches of Web Services. Available at: https://github.com/wsdream.
Many thanks to the WS-DREAM contributors:
- Jieming Zhu, Postdoc Fellow, The Chinese University of Hong Kong, Hong Kong (Coordinator)
- Zibin Zheng, Associate Professor, Sun Yat-sen University, China (for UIPCC)
- Pinjia He, PhD Student, The Chinese University of Hong Kong, Hong Kong (for HMF)
- Yuwen Xiong, Visiting Student from Zhejiang University, China (for TF, NTF, WSPred, OPred, BiasedMF, SVD++)
- Yifei Lu, Visiting Student from Zhejiang University, China (for ADF, TD-WSRec)
## Feedback
For bugs and feedback, please post to our issue page. For any other enquiries, please drop an email to our team (wsdream.maillist@gmail.com).
## License
The MIT License (MIT)
Copyright © 2016, WS-DREAM, CUHK