
Commit

doc
montyvesselinov committed Dec 29, 2018
1 parent 0ebfa59 commit f952186
Showing 68 changed files with 3,214 additions and 169 deletions.
2 changes: 1 addition & 1 deletion MADS.md
@@ -10,7 +10,7 @@ MADS can execute a wide range of data- and model-based analyses:
* Uncertainty Quantification
* Model Selection and Model Averaging
* Model Reduction and Surrogate Modeling
* Machine Learning and Blind Source Separation
* Machine Learning (e.g., Blind Source Separation, Source Identification, Feature Extraction)
* Decision Analysis and Support

MADS has been tested to perform HPC simulations on a wide range of multi-processor clusters and parallel environments (Moab, Slurm, etc.).
2 changes: 1 addition & 1 deletion README.md
@@ -21,7 +21,7 @@ MADS can execute a wide range of data- and model-based analyses:
* Uncertainty Quantification
* Model Selection and Model Averaging
* Model Reduction and Surrogate Modeling
* Machine Learning and Blind Source Separation
* Machine Learning (e.g., Blind Source Separation, Source Identification, Feature Extraction)
* Decision Analysis and Support

MADS has been tested to perform HPC simulations on a wide range of multi-processor clusters and parallel environments (Moab, Slurm, etc.).
1 change: 1 addition & 0 deletions docs/Examples/contaminant_source_identification/index.md
1 change: 1 addition & 0 deletions docs/Examples/machine_learning/index.md
2 changes: 1 addition & 1 deletion examples/bigdt/source_termination.md
@@ -75,7 +75,7 @@ The set of variances is

$$ \{ σ^2: \frac{σ^2_0}{10^h} ≤ σ^2 ≤ 10^h σ^2_0 \} $$

where $$σ^2_0$$ is the nominal variance (500; the nominal standard deviation $$σ_0$$ is ~22) and $$h$$ is the horizon of uncertainty.
where $σ^2_0$ is the nominal variance (500; the nominal standard deviation $σ_0$ is ~22) and $h$ is the horizon of uncertainty.
As the horizon of uncertainty $h$ increases, these sets grow, allowing for more possibilities.
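A quick numerical check of these bounds (a Python sketch using the nominal variance from the text; not part of the MADS codebase):

```python
import math

def variance_interval(h, sigma2_0=500.0):
    """Return the (lower, upper) variance bounds for horizon of uncertainty h."""
    return sigma2_0 / 10**h, sigma2_0 * 10**h

# At h = 0 the set collapses to the nominal variance alone.
lo, hi = variance_interval(0)
assert lo == hi == 500.0

# As h grows, the set widens, admitting more possibilities.
lo, hi = variance_interval(1)
print(lo, hi)  # 50.0 5000.0

# The nominal standard deviation is ~22.
print(round(math.sqrt(500.0), 1))  # 22.4
```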

### Robustness
126 changes: 63 additions & 63 deletions examples/blind_source_separation/blind_source_separation.jl
@@ -7,76 +7,76 @@ R=1
@info("Reconstruction of random signals ...")
Random.seed!(2015)
nk = 3
# s1 = rand(100)
# s2 = rand(100)
# s3 = rand(100)
# S = [s1 s2 s3]
# Mads.plotseries(S, "rand_original_signals.svg", title="Original signals", name="Signal", combined=false)
# H = [[1,1,1] [0,2,1] [1,0,2] [1,2,0]]
# X = S * H
# Mads.plotseries(X, "rand_mixed_observations.svg", title="Mixed observations", name="Observation", combined=false)
# info("Reconstruction of random signals using NMF ...")
# Wnmf, Hnmf, pnmf = Mads.NMFm(X, nk; retries=R)
# Mads.plotseries(Wnmf, "rand_unmixed_signals_nmf.svg", title="Unmixed signals", name="Signal", combined=false)
s1 = rand(100)
s2 = rand(100)
s3 = rand(100)
S = [s1 s2 s3]
Mads.plotseries(S, "rand_original_signals.svg", title="Original signals", name="Signal", combined=false)
H = [[1,1,1] [0,2,1] [1,0,2] [1,2,0]]
X = S * H
Mads.plotseries(X, "rand_mixed_observations.svg", title="Mixed observations", name="Observation", combined=false)
@info("Reconstruction of random signals using NMF ...")
Wnmf, Hnmf, pnmf = Mads.NMFm(X, nk; retries=R)
Mads.plotseries(Wnmf, "rand_unmixed_signals_nmf.svg", title="Unmixed signals", name="Signal", combined=false)

# info("Reconstruction of random signals using JuMP/ipopt ...")
# Wipopt, Hipopt, pipopt = Mads.NMFipopt(X, nk, R)
# Mads.plotseries(Wipopt, "rand_unmixed_signals_ipopt.svg", title="Unmixed signals", name="Signal", combined=false)
@info("Reconstruction of random signals using JuMP/ipopt ...")
Wipopt, Hipopt, pipopt = Mads.NMFipopt(X, nk, R)
Mads.plotseries(Wipopt, "rand_unmixed_signals_ipopt.svg", title="Unmixed signals", name="Signal", combined=false)

# info("Reconstruction of random signals using Mads LM ...")
# Wlm, Hlm, plm = Mads.MFlm(X, nk)
# Mads.plotseries(Wlm, "rand_unmixed_signals_nmflm.svg", title="Unmixed signals", name="Signal", combined=false)
@info("Reconstruction of random signals using Mads LM ...")
Wlm, Hlm, plm = Mads.MFlm(X, nk)
Mads.plotseries(Wlm, "rand_unmixed_signals_nmflm.svg", title="Unmixed signals", name="Signal", combined=false)

# info("Reconstruction of sin signals ...")
# Random.seed!(2015)
# nk = 3
# s1 = (sin.(0.05:0.05:5)+1)/2
# s2 = (sin.(0.3:0.3:30)+1)/2
# s3 = (sin.(0.2:0.2:20)+1)/2
# S = [s1 s2 s3]
# Mads.plotseries(S, "sin_original_signals.svg", title="Original signals", name="Signal", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
# H = [[1,1,1] [0,2,1] [1,0,2] [1,2,0]]
# X = S * H
# Mads.plotseries(X, "sin_mixed_observations.svg", title="Mixed observations", name="Observation", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
# info("Reconstruction of sin signals using NMF ...")
# Wnmf, Hnmf, pnmf = Mads.NMFm(X, nk, retries=R)
# Mads.plotseries(Wnmf, "sin_unmixed_signals_nmf.svg", title="Unmixed signals", name="Signal", combined=true)
# Mads.plotseries(Wnmf * Hnmf, "sin_reproduced_observations_nmf.svg", title="Reproduced observations", name="Signal", combined=true)
@info("Reconstruction of sin signals ...")
Random.seed!(2015)
nk = 3
s1 = (sin.(0.05:0.05:5) .+ 1) / 2
s2 = (sin.(0.3:0.3:30) .+ 1) / 2
s3 = (sin.(0.2:0.2:20) .+ 1) / 2
S = [s1 s2 s3]
Mads.plotseries(S, "sin_original_signals.svg", title="Original signals", name="Signal", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
H = [[1,1,1] [0,2,1] [1,0,2] [1,2,0]]
X = S * H
Mads.plotseries(X, "sin_mixed_observations.svg", title="Mixed observations", name="Observation", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
@info("Reconstruction of sin signals using NMF ...")
Wnmf, Hnmf, pnmf = Mads.NMFm(X, nk, retries=R)
Mads.plotseries(Wnmf, "sin_unmixed_signals_nmf.svg", title="Unmixed signals", name="Signal", combined=true)
Mads.plotseries(Wnmf * Hnmf, "sin_reproduced_observations_nmf.svg", title="Reproduced observations", name="Signal", combined=true)

# info("Reconstruction of sin signals using JuMP/ipopt ...")
# Wipopt, Hipopt, pipopt = Mads.NMFipopt(X, nk, R)
# Mads.plotseries(Wipopt, "sin_unmixed_signals_ipopt.svg", title="Unmixed signals", name="Signal", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
# Mads.plotseries(Wipopt * Hipopt, "sin_reproduced_observations_ipopt.svg", title="Reproduced observations", name="Observation", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
@info("Reconstruction of sin signals using JuMP/ipopt ...")
Wipopt, Hipopt, pipopt = Mads.NMFipopt(X, nk, R)
Mads.plotseries(Wipopt, "sin_unmixed_signals_ipopt.svg", title="Unmixed signals", name="Signal", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
Mads.plotseries(Wipopt * Hipopt, "sin_reproduced_observations_ipopt.svg", title="Reproduced observations", name="Observation", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)

# info("Reconstruction of sin signals using LM ...")
# Wlm, Hlm, plm = Mads.MFlm(X, nk)
# Mads.plotseries(Wlm, "sin_unmixed_signals_nmflm.svg", title="Unmixed signals", name="Signal", combined=true)
# Mads.plotseries(Wlm * Hlm, "sin_reproduced_observations_nmflm.svg", title="Reproduced observations", name="Signal", combined=true)
@info("Reconstruction of sin signals using LM ...")
Wlm, Hlm, plm = Mads.MFlm(X, nk)
Mads.plotseries(Wlm, "sin_unmixed_signals_nmflm.svg", title="Unmixed signals", name="Signal", combined=true)
Mads.plotseries(Wlm * Hlm, "sin_reproduced_observations_nmflm.svg", title="Reproduced observations", name="Signal", combined=true)

# info("Reconstruction of sin/rand signals ...")
# Random.seed!(2015)
# nk = 3
# s1 = (sin.(0.05:0.05:5)+1)/2
# s2 = (sin.(0.3:0.3:30)+1)/2
# s3 = rand(100)
# S = [s1 s2 s3]
# Mads.plotseries(S, "sig_original_signals.svg", title="Original signals", name="Signal", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
# Mads.plotseries(S, "sig_original_signals.png", title="Original signals", name="Signal", combined=true)
# H = [[1,1,1] [0,2,1] [1,0,2] [1,2,0]]
# X = S * H
# Mads.plotseries(X, "sig_mixed_observations.svg", title="Mixed observations", name="Observation", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
# Mads.plotseries(X, "sig_mixed_observations.png", title="Mixed observations", name="Signal", combined=true)
# info("Reconstruction of sin/rand signals using NMF ...")
# Wnmf, Hnmf, pnmf = Mads.NMFm(X, nk, retries=1)
# Mads.plotseries(Wnmf, "sig_unmixed_signals_nmf.svg", title="Unmixed signals", name="Signal", combined=true)
# Mads.plotseries(Wnmf, "sig_unmixed_signals_nmf.png", title="Unmixed signals", name="Signal", combined=true)
# Mads.plotseries(Wnmf * Hnmf, "sig_reproduced_observations_nmf.svg", title="Reproduced observations", name="Signal", combined=true)
@info("Reconstruction of sin/rand signals ...")
Random.seed!(2015)
nk = 3
s1 = (sin.(0.05:0.05:5) .+ 1) / 2
s2 = (sin.(0.3:0.3:30) .+ 1) / 2
s3 = rand(100)
S = [s1 s2 s3]
Mads.plotseries(S, "sig_original_signals.svg", title="Original signals", name="Signal", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
Mads.plotseries(S, "sig_original_signals.png", title="Original signals", name="Signal", combined=true)
H = [[1,1,1] [0,2,1] [1,0,2] [1,2,0]]
X = S * H
Mads.plotseries(X, "sig_mixed_observations.svg", title="Mixed observations", name="Observation", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
Mads.plotseries(X, "sig_mixed_observations.png", title="Mixed observations", name="Signal", combined=true)
@info("Reconstruction of sin/rand signals using NMF ...")
Wnmf, Hnmf, pnmf = Mads.NMFm(X, nk, retries=1)
Mads.plotseries(Wnmf, "sig_unmixed_signals_nmf.svg", title="Unmixed signals", name="Signal", combined=true)
Mads.plotseries(Wnmf, "sig_unmixed_signals_nmf.png", title="Unmixed signals", name="Signal", combined=true)
Mads.plotseries(Wnmf * Hnmf, "sig_reproduced_observations_nmf.svg", title="Reproduced observations", name="Signal", combined=true)

# info("Reconstruction of sin/rand signals using JuMP/ipopt ...")
# Wipopt, Hipopt, pipopt = Mads.NMFipopt(X, nk, 1)
# Mads.plotseries(Wipopt, "sig_unmixed_signals_ipopt.svg", title="Unmixed signals", name="Signal", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
# Mads.plotseries(Wipopt, "sig_unmixed_signals_ipopt.png", title="Unmixed signals", name="Signal", combined=true)
# Mads.plotseries(Wipopt * Hipopt, "sig_reproduced_observations_ipopt.svg", title="Reproduced observations", name="Observation", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
@info("Reconstruction of sin/rand signals using JuMP/ipopt ...")
Wipopt, Hipopt, pipopt = Mads.NMFipopt(X, nk, 1)
Mads.plotseries(Wipopt, "sig_unmixed_signals_ipopt.svg", title="Unmixed signals", name="Signal", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)
Mads.plotseries(Wipopt, "sig_unmixed_signals_ipopt.png", title="Unmixed signals", name="Signal", combined=true)
Mads.plotseries(Wipopt * Hipopt, "sig_reproduced_observations_ipopt.svg", title="Reproduced observations", name="Observation", combined=true, hsize=8Gadfly.inch, vsize=4Gadfly.inch)

@info("Reconstruction of sin/rand disturbance signal ...")
Random.seed!(2015)
22 changes: 20 additions & 2 deletions examples/machine_learning/machine_learning.md
@@ -3,7 +3,7 @@
Unsupervised Machine Learning methods are powerful data-analytics tools capable of extracting important features hidden (latent) in large datasets without any prior information.
The physical interpretation of the extracted features is done *a posteriori* by subject-matter experts.

In contrast, supervised ML methods are trained based on large labeled datasets; the labeling is performed a priori by subject-matter experts.
In contrast, supervised ML methods are trained based on large labeled datasets; the labeling is performed *a priori* by subject-matter experts.
The process of deep ML commonly includes both unsupervised and supervised techniques [LeCun, Bengio, and Hinton 2015](https://www.nature.com/articles/nature14539), where unsupervised ML is applied to facilitate the process of data labeling.

The integration of large datasets, powerful computational capabilities, and affordable data storage has resulted in the widespread use of Machine Learning in science, technology, and industry.
@@ -12,6 +12,24 @@ Recently we have developed novel unsupervised Machine Learning methods.
The methods are based on Matrix/Tensor Factorization coupled with sparsity and nonnegativity constraints.
They reveal the temporal and spatial footprints of the extracted features.

$$ { 1/2 ||X-G \bigotimes_1 A_1 \bigotimes_2 A_2 \dots \bigotimes_n A_n ||_F^2 } $$
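In the two-way (matrix) case, this objective reduces to nonnegative matrix factorization, minimizing $\frac{1}{2}||X - WH||_F^2$ with $W, H \geq 0$. A minimal NumPy sketch using Lee-Seung multiplicative updates, mirroring the three-signal mixing setup of `blind_source_separation.jl` (illustrative only; the Mads solvers such as `Mads.NMFm` add retries, sparsity constraints, and alternative optimizers):

```python
import numpy as np

def nmf(X, nk, iters=500, eps=1e-9):
    """Factor X (m x n, nonnegative) into W (m x nk) @ H (nk x n) with W, H >= 0."""
    rng = np.random.default_rng(2015)
    m, n = X.shape
    W = rng.random((m, nk))
    H = rng.random((nk, n))
    for _ in range(iters):
        # Lee-Seung multiplicative updates keep both factors nonnegative.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Mix three random nonnegative "signals" with a known 3x4 mixing matrix.
rng = np.random.default_rng(0)
S = rng.random((100, 3))
M = np.array([[1, 0, 1, 1], [1, 2, 0, 2], [1, 1, 2, 0]], dtype=float)
X = S @ M
W, H = nmf(X, nk=3)
# The recovered factors reproduce the mixed observations with small error.
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```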
### Examples

* [Blind Source Separation (i.e. Feature Extraction)](../../Examples/blind_source_separation/index.html)
* [Contaminant Source Identification](../../Examples/contaminant_source_identification/index.html)

### Tensor Factorization

A novel unsupervised ML method based on Tensor Factorization (TF) coupled with sparsity and nonnegativity constraints has been applied to extract the temporal and spatial footprints of features in multi-dimensional datasets represented as multi-way arrays (tensors).
The factorization of a given tensor $X$ is typically performed by minimization of the Frobenius norm:

$$ { \frac{1}{2} ||X-G \otimes_1 A_1 \otimes_2 A_2 \dots \otimes_N A_N ||_F^2 } $$

where:

* $N$ is the dimensionality of the tensor $X$
* $G$ is a mixing “core” tensor
* $A_1,A_2,\dots,A_N$ are “feature” factors (in the form of vectors or matrices)
* $\otimes_n$ is a tensor product applied to fold-in the factors $A_1,A_2,\dots,A_N$ in each of the tensor dimensions

The product $G \otimes_1 A_1 \otimes_2 A_2 \dots \otimes_N A_N$ is an estimate of $X$ ($X_{\rm est}$).
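For a concrete three-way case, the estimate $G \otimes_1 A_1 \otimes_2 A_2 \otimes_3 A_3$ and the objective above can be evaluated with NumPy's `einsum` (an illustrative sketch of the Tucker-style product, not the Mads implementation):

```python
import numpy as np

def tucker_estimate(G, A1, A2, A3):
    """Fold the core tensor G into each mode with factor matrices A1, A2, A3."""
    # Mode-n product: contract the n-th index of G with the second index of A_n.
    return np.einsum('abc,ia,jb,kc->ijk', G, A1, A2, A3)

def objective(X, G, A1, A2, A3):
    """0.5 * squared Frobenius norm of the residual X - X_est."""
    return 0.5 * np.linalg.norm(X - tucker_estimate(G, A1, A2, A3))**2

rng = np.random.default_rng(1)
G = rng.random((2, 3, 2))                                   # core tensor
A1, A2, A3 = rng.random((5, 2)), rng.random((4, 3)), rng.random((6, 2))
X = tucker_estimate(G, A1, A2, A3)                          # factors exactly
print(objective(X, G, A1, A2, A3))                          # zero residual
```

The factorization itself would alternate updates of $G$ and each $A_n$ to drive this objective down; the sketch only shows how the estimate and the residual norm are formed.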


9 changes: 6 additions & 3 deletions mkdocs.yml
@@ -21,6 +21,8 @@ pages:
- Decision Analysis: Examples/bigdt/source_termination/index.md
- Information Gap Analysis: Examples/infogap/index.md
- Bayesian Sampling: Examples/bayesian_sampling/index.md
- Machine Learning: Examples/machine_learning/index.md
- Contaminant Source Identification: Examples/contaminant_source_identification/index.md
- Blind Source Separation: Examples/blind_source_separation/index.md
- Contaminant Transport: Examples/contamination/index.md
- ODE Analysis: Examples/ode/index.md
@@ -55,9 +57,10 @@ markdown_extensions:
# enable_dollar_delimiter: True #for use of inline $..$

extra_javascript:
- https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML
# - https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML
# - assets/mathjaxhelper.js
- https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-AMS_HTML
# - https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML
# - https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML
# - assets/mathjaxhelper.js

extra:
palette:
26 changes: 25 additions & 1 deletion site/404.html
@@ -338,6 +338,30 @@



<li class="md-nav__item">
<a href="/Mads.jl/Examples/machine_learning/" title="Machine Learning" class="md-nav__link">
Machine Learning
</a>
</li>







<li class="md-nav__item">
<a href="/Mads.jl/Examples/contaminant_source_identification/" title="Contaminant Source Identification" class="md-nav__link">
Contaminant Source Identification
</a>
</li>







<li class="md-nav__item">
<a href="/Mads.jl/Examples/blind_source_separation/" title="Blind Source Separation" class="md-nav__link">
Blind Source Separation
@@ -651,7 +675,7 @@ <h1>404 - Not found</h1>

<script>app.initialize({version:"1.0.4",url:{base:"/Mads.jl"}})</script>

<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-AMS_HTML"></script>



30 changes: 27 additions & 3 deletions site/EXAMPLES/bayesian_sampling/index.html
@@ -461,6 +461,30 @@



<li class="md-nav__item">
<a href="../machine_learning/" title="Machine Learning" class="md-nav__link">
Machine Learning
</a>
</li>







<li class="md-nav__item">
<a href="../contaminant_source_identification/" title="Contaminant Source Identification" class="md-nav__link">
Contaminant Source Identification
</a>
</li>







<li class="md-nav__item">
<a href="../blind_source_separation/" title="Blind Source Separation" class="md-nav__link">
Blind Source Separation
@@ -942,13 +966,13 @@ <h4 id="individual-spaghetti-plots_1">Individual spaghetti plots</h4>
</a>


<a href="../blind_source_separation/" title="Blind Source Separation" class="md-flex md-footer-nav__link md-footer-nav__link--next" rel="next">
<a href="../machine_learning/" title="Machine Learning" class="md-flex md-footer-nav__link md-footer-nav__link--next" rel="next">
<div class="md-flex__cell md-flex__cell--stretch md-footer-nav__title">
<span class="md-flex__ellipsis">
<span class="md-footer-nav__direction">
Next
</span>
Blind Source Separation
Machine Learning
</span>
</div>
<div class="md-flex__cell md-flex__cell--shrink">
@@ -982,7 +1006,7 @@ <h4 id="individual-spaghetti-plots_1">Individual spaghetti plots</h4>

<script>app.initialize({version:"1.0.4",url:{base:"../.."}})</script>

<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-AMS_HTML"></script>



26 changes: 25 additions & 1 deletion site/EXAMPLES/bigdt/source_termination/index.html
@@ -483,6 +483,30 @@



<li class="md-nav__item">
<a href="../../machine_learning/" title="Machine Learning" class="md-nav__link">
Machine Learning
</a>
</li>







<li class="md-nav__item">
<a href="../../contaminant_source_identification/" title="Contaminant Source Identification" class="md-nav__link">
Contaminant Source Identification
</a>
</li>







<li class="md-nav__item">
<a href="../../blind_source_separation/" title="Blind Source Separation" class="md-nav__link">
Blind Source Separation
@@ -1077,7 +1101,7 @@ <h3 id="model">Model</h3>

<script>app.initialize({version:"1.0.4",url:{base:"../../.."}})</script>

<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_HTML"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-AMS_HTML"></script>



