Note
These results are based on the paper [CIT2020-BARRACHINA]. Information about the network models and the dataset can be found there.
These results can be replicated by running the following code (changing the input parameters where appropriate):
from cvnn.montecarlo import run_gaussian_dataset_montecarlo
run_gaussian_dataset_montecarlo(iterations=30, m=10000, n=128, param_list=None,
epochs=300, batch_size=100, display_freq=1, optimizer='adam',
shape_raw=None, activation='cart_relu', debug=False,
polar=False, do_all=True, dropout=0.5)
Note
Results will differ from those published for the following reasons (fixed in the arXiv version):
- Since version 0.2.89, the default real-valued MLP model changed to a new definition that will be published in a future article.
- Since version 0.3.48, cvnn uses the TensorFlow optimizers, which average gradients by batch size (as discussed in this post). This issue was raised officially.
- The default optimizer changed to Adam.
- The dataset is generated randomly, so two independent runs of
run_gaussian_dataset_montecarlo
may not yield the same confidence intervals and results.
- Disclaimer: all comparisons involving a parameter variation were done on the same dataset.
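The gradient-averaging point above can be illustrated with a small NumPy sketch (a hypothetical toy model, not part of cvnn): averaging per-sample gradients divides the summed gradient by the batch size, which effectively rescales the learning rate by 1/batch_size compared with summing.

```python
import numpy as np

# Toy linear model y_hat = w * x with squared error per sample.
# Per-sample gradient: dL/dw = 2 * (w * x - y) * x
rng = np.random.default_rng(0)
x = rng.normal(size=8)          # one batch of 8 samples
y = 3.0 * x                     # true relation uses w = 3
w = 1.0                         # current parameter estimate

per_sample_grads = 2 * (w * x - y) * x

summed = per_sample_grads.sum()     # "sum over batch" convention
averaged = per_sample_grads.mean()  # TensorFlow-style: averaged by batch size

# Averaging is the summed gradient divided by the batch size, so a step
# w -= lr * averaged behaves like w -= (lr / batch_size) * summed.
assert np.isclose(averaged * len(x), summed)
```

With `w = 1.0` below the true coefficient, both conventions give a gradient of the same (negative) sign; only its magnitude changes, which is why switching conventions alters the effective learning rate but not the descent direction.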
Simulation Results
- several_coef_correl
- base_case_type_a_2hl
- base_case_type_a_1hl
Method Documentation
- CIT2020-BARRACHINA
Jose Agustin Barrachina, Chenfang Ren, Christele Morisseau, Gilles Vieillard, Jean-Philippe Ovarlez, “Complex-Valued vs. Real-Valued Neural Networks for Classification Perspectives: An Example on Non-Circular Data,” arXiv:2009.08340 [stat.ML], Sep. 2020. Available: https://arxiv.org/abs/2009.08340.